Vince Calhoun Founding Director & Distinguished University Professor, TReNDS Center
Dr. Calhoun is the founding director of the tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS) and a Georgia Research Alliance eminent scholar in brain health and image analysis, with appointments at Georgia State University, the Georgia Institute of Technology, and Emory University. He was previously President of the Mind Research Network and Distinguished Professor of Electrical and Computer Engineering at the University of New Mexico. He is the author of more than 800 full journal articles and over 850 technical reports, abstracts, and conference proceedings. His work includes the development of flexible methods to analyze functional magnetic resonance imaging data, such as independent component analysis (ICA), deep learning for neuroimaging, data fusion of multimodal imaging and genetics data, neuroinformatics tools, and the identification of biomarkers for disease. His research is funded by the NIH and NSF, among other funding agencies. Dr. Calhoun is a fellow of the Institute of Electrical and Electronics Engineers, the American Association for the Advancement of Science, the American Institute for Medical and Biological Engineering, the American College of Neuropsychopharmacology, and the International Society for Magnetic Resonance in Medicine. He served as chair of the Organization for Human Brain Mapping from 2018 to 2019 and is a past chair of the IEEE Machine Learning for Signal Processing Technical Committee. He currently serves on the IEEE BISP Technical Committee and is also a member of the IEEE Data Science Initiative Steering Committee.
Executive control processes and flexible behaviors rely on the integrity of, and dynamic interactions between, large-scale brain networks. The right insular cortex is a critical component of a salience/midcingulo-insular network that is thought to mediate interactions between brain networks involved in externally oriented (central executive/lateral frontoparietal network) and internally oriented (default mode/medial frontoparietal network) processes. How these brain systems reconfigure with development is a critical question for cognitive neuroscience, with implications for neurodevelopmental pathologies affecting brain connectivity. I will describe studies examining how brain network dynamics support flexible behaviors in typical and atypical development, presenting evidence suggesting a unique role for the dorsal anterior insula from studies of meta-analytic connectivity modeling, dynamic functional connectivity, and structural connectivity. These findings from adults, typically developing children, and children with autism suggest that structural and functional maturation of insular pathways is a critical component of the process by which human brain networks mature to support complex, flexible cognitive processes throughout the lifespan.
Lucina Uddin Associate Professor, Department of Psychology, University of Miami
After receiving a Ph.D. in cognitive neuroscience from the psychology department at UCLA in 2006, Dr. Uddin completed a postdoctoral fellowship at the Child Study Center at NYU. For several years she worked as a faculty member in Psychiatry & Behavioral Sciences at the Stanford School of Medicine. She joined the psychology department at the University of Miami in 2014. Within a cognitive neuroscience framework, Dr. Uddin’s research combines analyses of resting-state fMRI and diffusion weighted imaging data to examine the organization of large-scale brain networks supporting executive functions. Her current projects focus on understanding dynamic network interactions underlying cognitive inflexibility in neurodevelopmental disorders such as autism. Dr. Uddin’s work (over 125 publications) has been published in the Journal of Neuroscience, Cerebral Cortex, JAMA Psychiatry, Biological Psychiatry, PNAS, and Nature Reviews Neuroscience. She was awarded the Young Investigator award by the Organization for Human Brain Mapping in 2017.
Website: https://bccl.psy.miami.edu/
The Enhancing Neuroimaging Genetics through Meta-Analysis (ENIGMA) consortium is a worldwide, largely volunteer research collaboration that recently celebrated its 10th year. In that time, it has become a model of large-scale imaging data re-use for meta- and mega-analysis, leading to numerous results on genetic effects on brain structure, clinical effects on brain structure and function, and methods for the analysis of heterogeneous structural and functional data. Clinical research domains range from schizophrenia to bipolar disorder to Parkinson's disease, sleep disorders, HIV, and addiction, while methodological developments span gray matter volumes, subcortical shape, sulcal and gyral depth, white matter tract-based statistics, and resting state analyses, with recommendations for applying an ENIGMA approach to task-based fMRI in development. These efforts highlight both the value of combining data analyses and the challenges of combining data from studies collected for different reasons in different environments. I will review some of the current efforts underway across the more than 50 working groups within ENIGMA, along with the implications of these kinds of large-scale analyses for the future.
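To make the distinction between pooled analyses concrete, the sketch below shows basic inverse-variance-weighted fixed-effect and DerSimonian-Laird random-effects pooling of site-level effect sizes. The numbers are invented, and this is an illustration of the general technique only, not ENIGMA's actual protocols or code.

```python
# Hedged sketch of inverse-variance-weighted meta-analysis of site-level effects
# (illustrative only; effect sizes and variances below are invented).
import numpy as np

# Per-site effect sizes (e.g. case-control Cohen's d) and their variances.
d = np.array([0.31, 0.18, 0.42, 0.25, 0.10])
v = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Fixed-effect estimate.
w_fe = 1.0 / v
d_fe = np.sum(w_fe * d) / np.sum(w_fe)

# DerSimonian-Laird estimate of between-site heterogeneity (tau^2),
# then random-effects pooling.
q = np.sum(w_fe * (d - d_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (len(d) - 1)) / c)
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled effect: {d_re:.3f} +/- {1.96 * se_re:.3f}")
```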
Jessica Turner Associate Professor, Neuroscience, Psychology, TReNDS Center & Georgia State University
Dr. Turner received her PhD in Psychology (Cognitive Sciences) from the University of California, Irvine, followed by a post-doctoral position at Rutgers, The State University of New Jersey, learning single-cell recording and optical imaging techniques. Having determined that invasive measures were not her preferred techniques, she moved into functional and structural neuroimaging and was fascinated by the ability to measure brain function non-invasively. Since then, her research program has used neuroimaging of clinical populations to improve understanding of the structural and functional circuitry underlying mental illness and health, and integrates several approaches: the combination of imaging with genetics, to identify genotypes which might help individualize treatment and prognosis; structural and functional imaging across multiple institutions to develop robust clinical neuroimaging studies; use of these neuroimaging methods in schizophrenia and other disorders to determine the relationship of brain volume and functional characteristics with disease status and symptom profiles; and large-scale neuroimaging data sharing to support the international collaborations needed to perform imaging genetics analyses. Since 2013 she has been at Georgia State University as faculty in psychology and neuroscience, and the head of the Imaging Genetics and Informatics Laboratory.
Website: https://psychology.gsu.edu/profile/jessica-turner/
The long predominant paradigm in neuroimaging has been to compare (mean) local volume or activity between groups, or to correlate these with behavioral phenotypes. Such an approach, however, is intrinsically limited in terms of possible insight into inter-individual differences and application in clinical practice. Recently, the increasing availability of large cohort data and tools for multivariate statistical learning, allowing the prediction of individual cognitive or clinical phenotypes in new subjects, has started a revolution in imaging neuroscience.
The transformation of systems neuroscience into a big data discipline poses many new challenges, yet the most critical aspect is the still sub-optimal relationship between the extremely wide feature space from neuroimaging and the comparably low number of subjects. This, however, is only true when approaching machine learning in neuroimaging in a naïve fashion, i.e., when ignoring the large body of existing work on human brain mapping. The regional segregation of the brain into distinct modules as well as its large-scale, distributed networks provide the fundamental organizational principles of the human brain and hence the basis for cognitive information processing. Importantly, both can now be mapped in a highly robust fashion by integrating information from hundreds or even thousands of individual subjects to provide a priori information.
This talk will outline the fundamental principles of topographic organization in the human brain and the robust mapping of functional networks. I will then illustrate how this knowledge of human brain organization can be leveraged for inference on socio-affective or cognitive traits in previously unseen individual subjects, or on psychopathology in mental disorders. Providing a bidirectional translation, such applications will in turn yield information on the respective brain regions and networks.
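As an illustration of how prior knowledge of brain organization can tame the wide-feature, small-sample problem described above, the following sketch collapses voxel-wise data onto an a priori parcellation before cross-validated prediction of an individual trait. The data, atlas labels, and target are random stand-ins; this is a generic example of the strategy, not the speaker's pipeline.

```python
# Hedged sketch: use a prior parcellation to reduce voxel-wise features before
# predicting an individual trait (all data here are random stand-ins).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_voxels, n_parcels = 200, 10_000, 100

voxel_maps = rng.standard_normal((n_subjects, n_voxels))   # e.g. connectivity maps
parcel_labels = rng.integers(0, n_parcels, size=n_voxels)  # a priori atlas labels
trait = rng.standard_normal(n_subjects)                    # e.g. a cognitive score

# Collapse the huge voxel feature space to one mean value per parcel.
parcel_features = np.stack(
    [voxel_maps[:, parcel_labels == p].mean(axis=1) for p in range(n_parcels)],
    axis=1)

# Predict the trait in unseen subjects with cross-validated ridge regression.
scores = cross_val_score(RidgeCV(), parcel_features, trait, cv=5, scoring="r2")
print(scores.mean())   # near zero here, because the toy data carry no signal
```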
Simon Eickhoff Full Professor & Director, Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf
Website: https://www.fz-juelich.de/inm/inm-7/EN/Home/home_node.html
Layer fMRI, which requires high field strength, advanced pulse sequences, and sophisticated processing methods, has emerged in the last decade. The rate of layer fMRI papers published has grown sharply as the delineation of mesoscopic-scale functional organization has shown success in providing insight into human brain processing. Layer fMRI promises to move beyond simply identifying where and when activation takes place, as inferences based on activation depth in the cortex can provide detailed information about directional feedforward and feedback activity. This new knowledge promises to bridge invasive measures and those typically carried out in humans. In this talk, I will describe the challenges in achieving laminar functional specificity as well as possible approaches to data analysis for both activation studies and resting state connectivity. I will highlight our work demonstrating task-related laminar modulation of primary sensory and motor systems as well as layer-specific activation in dorsolateral prefrontal cortex with a working memory task. Lastly, I will present recent work demonstrating cortical hierarchy in visual cortex using resting state connectivity laminar profiles.
Peter Bandettini Chief, Section on Functional Imaging Methods, National Institute of Mental Health
Dr. Bandettini received his undergraduate degree in Physics from Marquette University in 1989 and his Ph.D. in Biophysics in 1994 from the Medical College of Wisconsin, where he led the effort to carry out one of the first successful experiments in functional MRI. He completed his postdoc at the Massachusetts General Hospital NMR Center in 1996. After spending three years as an Assistant Professor at the Medical College of Wisconsin, he was recruited in 1999 to become Director of the Functional MRI Facility and Chief of the Section on Functional Imaging Methods at the National Institutes of Health. More recently, he became the founding Director of the Center for Multimodal Neuroimaging at the National Institute of Mental Health and has started a Machine Learning group and a Data Sharing group. He also recently completed a six-year tenure as Editor-in-Chief of the journal NeuroImage. He is the recipient of the 2001 OHBM Wiley Young Investigator Award and was awarded the ISMRM Gold Medal in 2020. His research focus over the past 29 years has been on advancing functional MRI in all ways, including novel fMRI methods in acquisition, processing, and paradigm design. His current research focus is high-resolution layer fMRI, dynamic connectivity, understanding and mitigating physiologic noise in fMRI time series, and deriving individual-specific information using fMRI. He has published over 175 papers and has presented over 390 invited lectures.
We discuss recent attempts to understand complex brain dynamics at large scale using approaches from statistical physics. The motivation to adopt this approach is rooted in a more general question: why is life complex and, most importantly, what is the origin of the overabundance of complexity in nature? This fundamental problem “is screaming to be answered but seldom is even being asked”, paraphrasing the late Per Bak.
In this lecture we will justify the approach by reviewing our attempts, across several scales, to understand the origins of biological complexity from the perspective of critical phenomena. We will then offer an overview of the experimental and numerical results pertaining to complex aspects of large-scale brain dynamics.
Dante Chialvo Head, Center for Complex Systems & Brain Sciences (CEMSC3), Universidad Nacional de San Martin
Dr. Dante R. Chialvo received his diploma in 1982 from the National University of Rosario, in Argentina. In 1985 he was appointed Professor in the Department of Physiology of the University of Rosario. From 1987 to 1992 he was Associate Professor at the State University of New York (Syracuse, NY), first in the Department of Pharmacology and later in the Computational Neuroscience Program. Between 1992 and 1995 he was associated with the Santa Fe Institute for the Sciences of Complexity in Santa Fe, New Mexico. Until 2010 he was a Full Professor at Northwestern University (Chicago) and at UCLA, after which he returned to Argentina as a Principal Investigator of CONICET.
Currently he is Full Professor and head of the Center for Complex Systems and Brain Sciences (CEMSC3) at UNSAM (Universidad Nacional de San Martin) in Buenos Aires, Argentina.
Throughout these years, he has been a Visiting Professor at numerous universities, including the University of Würzburg (Germany), the University of Copenhagen (Denmark), The Rockefeller University (USA), the University of the Balearic Islands, the University of Barcelona and the Complutense University of Madrid (Spain), Naples (Italy), the University of Rosario and the University of Cordoba (Argentina), the Universidad Mayor de San Andrés, La Paz (Bolivia), and the Jagiellonian University (Kraków), among others.
Dr. Chialvo has published more than 100 scientific papers, all dedicated to understanding natural phenomena from the point of view of the nonlinear dynamics of complex systems. His work covers a wide range of topics, including the mathematical modeling of cardiac arrhythmias, the study of molecular motors as stochastic ratchets, neural coding, and self-organization and collective phenomena in ant swarms, brains, and communities, among others. He received a Fulbright US Scholar Award in 2005, was named Distinguished Visiting Professor of the Complutense University of Madrid (Psychology Department), Spain, in 2006, received the Visiting Professor Award of the Seconda Università degli Studi di Napoli (Aversa, Italy), and was elected Fellow of the American Physical Society in 2007.
Website: http://www.chialvo.net/
The brain has all of the hallmarks of a complex system, with meaningful activity occurring at a wide range of spatial and temporal scales. When measured with resting state fMRI, all of this activity is compressed into a single measurement of the resulting hemodynamic response for each voxel at each time point. However, by leveraging the spatial, temporal, and spectral properties of different types of activity, we may be able to identify their signatures in the rs-fMRI signal. In this talk, I will describe some of the types of activity that we expect to contribute to the rs-fMRI signal and the features that might allow us to selectively extract them for use in research or the clinic.
Shella Keilholz Professor, Georgia Institute of Technology & Emory University
Dr. Shella D. Keilholz received her B.S. degree in physics from the University of Missouri Rolla (now Missouri University of Science and Technology) and her Ph.D. degree in engineering physics from the University of Virginia. Her thesis focused on quantitative measurements of perfusion with arterial spin labeling MRI. After graduation, she went to Dr. Alan Koretsky’s lab at the NIH as a Postdoctoral Researcher to learn functional neuroimaging. She is currently a Professor in the joint Emory/Georgia Tech Biomedical Engineering Department, Atlanta, GA, USA, and Program Director for the 9.4 T MRI. Her research seeks to elucidate the neurophysiological processes that underlie the BOLD signal and to develop analytical techniques that leverage spatial and temporal information to separate contributions from different sources.
The spatial patterning of each neurodegenerative disease relates closely to a distinct structural and functional network in the human brain. This talk will mainly describe how brain network-sensitive neuroimaging methods such as resting-state fMRI and diffusion MRI can shed light on brain network dysfunctions associated with pathology and cognitive decline from preclinical to clinical dementia. I will first present our findings from two independent datasets on how amyloid and cerebrovascular pathology influence brain functional networks cross-sectionally and longitudinally in individuals with mild cognitive impairment and dementia. Evidence on longitudinal functional network organizational changes in healthy older adults and the influence of APOE genotype will be presented. In the second part, I will describe our work on how different pathologies influence the brain structural network and white matter microstructure. I will also touch on new data on how brain network integrity contributes to behavior and disease progression, using multivariate or machine learning approaches. These findings underscore the importance of studying selective brain network vulnerability rather than individual regions, and of longitudinal designs. Developed further with machine learning approaches, multimodal network-specific imaging signatures will help reveal disease mechanisms and facilitate early detection, prognosis, and the search for treatments for neuropsychiatric disorders.
Juan (Helen) Zhou Principal Investigator, Center for Sleep and Cognition, Department of Medicine, Yong Loo Lin School of Medicine, National University of Singapore
Dr. Juan Helen Zhou is an Associate Professor at the Center for Sleep and Cognition and the Deputy Director of the Center for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore (NUS). She is also affiliated with the Duke-NUS Medical School. Her laboratory studies selective brain network-based vulnerability in neuropsychiatric disorders using multimodal neuroimaging and machine learning approaches. She received her Bachelor's degree and Ph.D. from the School of Computer Science and Engineering, Nanyang Technological University, Singapore. Dr. Zhou was an associate research scientist at the Department of Child and Adolescent Psychiatry, New York University. She did a post-doctoral fellowship at the Memory and Aging Center, University of California, San Francisco, and in the Computational Biology Program at the Singapore-MIT Alliance. Dr. Zhou is currently a Council Member and a previous Program Committee member of the Organization for Human Brain Mapping. She serves as an editor of multiple journals, including Human Brain Mapping, NeuroImage, and Communications Biology.
Website: http://neuroimaginglab.org/members.html
Jeffrey Malins Assistant Professor, Department of Psychology, Georgia State University
Nice to meet you! I am an assistant professor in the Department of Psychology at Georgia State University, and I am also affiliated faculty with the GSU Center for Research on the Challenges of Acquiring Language and Literacy and the GSU Neuroscience Institute. Prior to joining the faculty at GSU, I was an Associate Research Scientist in Pediatrics at Yale University. I also completed a postdoctoral fellowship at Haskins Laboratories, where I remain a Research Affiliate.
My research focuses on the brain networks that support reading, spoken language processing, and attentional control. I use neuroimaging to study how these networks overlap, diverge, and change over the course of learning. I also examine how different biological, cognitive, and environmental factors shape the connectivity of these networks. In my research, I work with numerous populations of learners, including school-age children, adolescents, and adults; individuals with reading, language, and/or attention deficits; and individuals who speak or read more than one language.
Over the past few years, I have had the pleasure of working with several collaborators in the GSU community to study reading development in children. Using fMRI, we are currently looking at the intersection between the brain networks underlying reading and attentional control (Arrington, Malins, et al., 2019, Developmental Cognitive Neuroscience). We are also following up on a recent study suggesting that a certain amount of variability in brain activity may be beneficial for reading growth (Malins et al., 2018, Journal of Neuroscience). In the future, I am particularly interested in examining how diverse experiences with language – such as bilingual language experience in children – help to shape the brain networks that support literacy skills.
I look forward to continuing to build connections with the neuroimaging community in Atlanta and beyond. Together, I hope we can find ways to connect brain research with current practices in education in order to help individuals reach their learning potential.
I will recall how reduction in variance and reduction in surprise are two similar (and sometimes identical) ways of investigating dependencies in dynamical systems.
I will show how this framework can be used to investigate higher-order dependencies, to find information-based multiplets, and to allow for a more precise characterization of multivariate patterns of connectivity.
I will then show how we can look at interactions across temporal scales, and present some applications in neuroscience.
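To make the variance/surprise correspondence concrete, here is a minimal numerical sketch for the jointly Gaussian case, where the reduction in surprise (mutual information) and the log-ratio of variances coincide. It is an illustration only, not code from the talk.

```python
# Minimal sketch: for jointly Gaussian variables, "reduction in variance" and
# "reduction in surprise" coincide, since I(X;Y) = 0.5 * log(var(X) / var(X|Y)).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.standard_normal(n)
x = 0.8 * y + 0.6 * rng.standard_normal(n)   # X depends on Y plus independent noise

# Reduction in variance: residual variance of X after regressing out Y.
slope = np.cov(x, y)[0, 1] / np.var(y)
resid = x - slope * y
var_reduction = 0.5 * np.log(np.var(x) / np.var(resid))

# Reduction in surprise: Gaussian mutual information from the correlation.
rho = np.corrcoef(x, y)[0, 1]
mutual_info = -0.5 * np.log(1.0 - rho ** 2)

print(var_reduction, mutual_info)   # identical up to sampling noise
```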
Daniele Marinazzo Professor of Neuroimaging Data Analysis, Ghent University
I am a statistical physicist (MSc 2001, PhD 2007, University of Bari) who has always worked on characterizing the dynamics of complex systems, mainly the brain. From 2008 to 2011 I was a postdoc at CNRS, University Paris 5, performing in vivo electrophysiology and dynamic clamp experiments. Since 2011 I have been Research Professor of Data Analysis at Ghent University, Belgium. I teach techniques of neuroimaging data analysis; I am a member of the Belgian node of the International Neuroinformatics Coordinating Facility (INCF) and a mentor for Google Summer of Code on their behalf. I am co-editor-in-chief of Neurons, Behavior, Data Analysis, and Theory, deputy editor of PLOS Computational Biology, editor of NeuroImage, Network Neuroscience, Brain Topography, and PLOS ONE, editor of the PLOS complexity channel, and a referee for many journals in the fields of neuroscience and applied physics.
This talk will focus on the modelling of resting state time series, or endogenous neuronal activity. I will survey recent developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI [1, 2] – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity have recently attracted a lot of attention among brain mappers. I will also show a novel method to identify dynamic effective connectivity using spectral DCM [3]. Further, I will summarise the development of the next generation of DCMs, towards large-scale, whole-brain schemes which are computationally inexpensive [4] and, at the other extreme, towards more sophisticated and biophysically detailed modelling based on canonical microcircuits [5].
Adeel Razi Associate Professor & ARC DECRA Fellow, Turner Institute for Brain and Mental Health, Monash University
Adeel Razi is an Associate Professor at the Turner Institute for Brain and Mental Health, Monash University, Australia, where he is the Head of the Computational Neuroscience Laboratory. His research is cross-disciplinary, combining engineering, physics, and machine-learning approaches to model complex, multi-scale network dynamics of brain structure and function using neuroimaging. He is currently an Australian Research Council DECRA Fellow (2017-2020) and has also been awarded an NHMRC Investigator (Emerging Leader) Fellowship (2021-2025). He is an Honorary Senior Research Fellow at the Wellcome Centre for Human Neuroimaging, University College London, where he also worked from 2012 to 2018. He received the B.E. degree in Electrical Engineering from the N.E.D. University of Engineering & Technology, Pakistan, the M.Sc. degree in Communications Engineering from RWTH Aachen University, Germany, and the Ph.D. degree in Electrical Engineering from the University of New South Wales, Australia, in 2012.
Website: www.adeelrazi.org
State-of-the-art magnetic resonance imaging (MRI) provides unprecedented opportunities to study brain structure (anatomy) and function (physiology). Based on such data, graph representations can be built in which nodes are associated with brain regions and edge weights with the strengths of structural or functional connections. In particular, structural graphs capture major neural pathways in white matter, while functional graphs map out statistical interdependencies between pairs of regional activity traces. Network analysis of these graphs has revealed emergent system-level properties of brain structure and function, such as efficiency of communication and modular organization.
In this talk, graph signal processing (GSP) will be presented as a novel framework to integrate brain structure, contained in the structural graph, with brain function, characterized by activity traces that can be considered as time-dependent graph signals. Such a perspective allows us to define novel, meaningful graph-filtering operations on brain activity that take into account the smoothness of signals on the anatomical backbone. This leads to a new measure of “coupling” between structure and function based on how activity is expressed on structural graph harmonics. To provide statistical inference, we also extend the well-known Fourier phase randomization method for generating surrogate data to the graph setting. This new measure reveals a behaviorally relevant spatial gradient, where sensory regions tend to be more coupled with structure and high-level cognitive ones less so. In addition, we make a case for introducing the graph modularity matrix at the core of GSP, in order to incorporate knowledge about graph community structure when processing signals on the graph, but without the need for community detection. Finally, recent work will highlight how the spatial resolution of this type of analysis can be increased to the voxel level, representing a few hundred thousand nodes.
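The sketch below illustrates the core GSP idea on toy data: treat regional activity as signals on a structural graph, project them onto the graph harmonics, and compute a simple per-region structure-function coupling ratio. Sizes, names, and the fifty-fifty spectral split are arbitrary choices for illustration, not the exact pipeline from the talk.

```python
# Illustrative GSP sketch (toy data; not the authors' exact pipeline).
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 90, 200

# Toy symmetric structural connectivity matrix (stand-in for a diffusion-MRI graph).
A = rng.random((n_regions, n_regions))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Normalized graph Laplacian and its eigenvectors = "structural harmonics".
d = A.sum(axis=1)
L = np.eye(n_regions) - A / np.sqrt(np.outer(d, d))
eigvals, harmonics = np.linalg.eigh(L)        # columns ordered low -> high graph frequency

# Toy regional fMRI time series (regions x time), z-scored.
X = rng.standard_normal((n_regions, n_timepoints))
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

# Graph Fourier transform: project activity onto the structural harmonics.
X_hat = harmonics.T @ X

# Split the graph spectrum into low (structure-coupled) and high (decoupled) halves
# and compute a simple per-region coupling ratio.
cut = n_regions // 2
X_low = harmonics[:, :cut] @ X_hat[:cut]      # smooth-on-structure component
X_high = harmonics[:, cut:] @ X_hat[cut:]     # structure-decoupled component
coupling = np.linalg.norm(X_low, axis=1) / np.linalg.norm(X_high, axis=1)

print(coupling.shape)   # one structure-function coupling value per region
```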
Dimitri Van De Ville Professor of Bioengineering, EPFL and University of Geneva
Dimitri Van De Ville received the Ph.D. degree in computer science engineering from Ghent University, Belgium, in 2002. He was a post-doctoral fellow (2002-2005) in the lab of Prof. Michael Unser at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, before becoming responsible for the Signal Processing Unit at the University Hospital of Geneva, Switzerland, as part of the Centre d’Imagerie Biomédicale (CIBM). In 2009, he received a Swiss National Science Foundation professorship, and in 2015 he became Professor of Bioengineering at EPFL, jointly affiliated with the University of Geneva, Switzerland. His main research interest is computational neuroimaging to advance cognitive and clinical neurosciences. His methods toolbox includes wavelets, sparsity, deconvolution, and graph signal processing. He was a recipient of the Pfizer Research Award 2012, the NARSAD Independent Investigator Award 2014, and the Leenaards Foundation Award 2016, and was elected Fellow of the IEEE in 2020.
Dr. Van De Ville has served as an Editor for the new journal NeuroImage: Reports since 2020, as a Senior Editor for the IEEE Transactions on Signal Processing since 2019, and as an Editor for the SIAM Journal on Imaging Sciences since 2018. He served as an Associate Editor for the IEEE Transactions on Image Processing from 2006 to 2009 and for the IEEE Signal Processing Letters from 2004 to 2006. He was the Chair of the Bio Imaging and Signal Processing (BISP) Technical Committee of the IEEE Signal Processing Society (2012-2013) and the Founding Chair of the EURASIP Biomedical Image & Signal Analytics SAT (2016-2018). He is Co-Chair of the biennial Wavelets & Sparsity conference series, together with Y. Lu and M. Papadakis.
Website: miplab.epfl.ch
Good representations are critical to the success of both biological and artificial information processing systems. In this talk, I will highlight new approaches that my lab is developing for representation learning and alignment, and demonstrate their applications in the analysis and interpretation of both biological and artificial neural networks. Being able to align neural representations promises meaningful ways of comparing high-dimensional neural activity across time, subsets of neurons, individuals, and potentially across disease states.
Eva Dyer Assistant Professor, Georgia Institute of Technology & Emory University
Eva Dyer is an Assistant Professor in the Coulter Department of Biomedical Engineering at the Georgia Institute of Technology and Emory University. Dr. Dyer works at the intersection of neuroscience and machine learning, developing machine learning approaches to interpret complex neuroscience datasets, and designing new machine intelligence architectures inspired by the organization and function of biological brains. Dr. Dyer completed all of her degrees in Electrical & Computer Engineering, obtaining a Ph.D. and M.S. from Rice University, and a B.S. from the University of Miami. She is the recipient of a Sloan Fellowship in Neuroscience, an NSF CISE Research Initiation Initiative Award, was a previous Allen Institute for Brain Science Next Generation Leader, and was recently awarded a McKnight Award for Technological Innovations in Neuroscience.
Website: https://dyerlab.gatech.edu/
Two paradigms continue to spar in neuroscience at the local and system levels. At the elemental level of neurons, and grounded in the prominent work of many cellular physiologists including Eccles, Hodgkin, and Huxley, overwhelming evidence implicates directional flows of information from functional unit to functional unit. At the systems level, driven forward by giants of systems neuroscience such as Freeman, Eckhorn, Gray, and Singer, the focus shifts to ensemble activities that yield evidence of functional synchronization. In their extreme forms, both paradigms overlook one important property of complex functional systems. A strictly serial model of synaptic propagation labors to achieve mass action, while a completely collective system presents many roadblocks to the dynamics of its self-organized activities, making for a brain frozen in time and inefficient at adaptation and flexibility. I will place in the reconciliatory middle ground the theory of brain metastability initially pioneered by Kelso. Its spatiotemporal complexity is permissive of information flows at the same time as transient coordination provides collective power at multiple spatial scales. I will provide mathematical bases for its study in models that prepare for the empirical encounter of its phenomenology, and I will outline empirical evidence of its pervasiveness. Although metastability is conceptually contiguous with the two aforementioned paradigms, I will describe the pitfalls that follow from analyzing it as an approximation of them, and I will argue that a shift in perspective and methods is required to fully understand brain complexity.
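One common quantitative handle on metastability, given here only as an illustration and not necessarily the formalism used in the talk, is the variability over time of the Kuramoto order parameter of N coupled phases:

```latex
% Kuramoto order parameter of N coupled phases \theta_j(t), and a simple
% metastability index as the temporal variance of its magnitude.
\begin{align}
  R(t)\, e^{i\Psi(t)} &= \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j(t)}, \\
  \text{metastability} &\equiv \operatorname{Var}_t\!\big[ R(t) \big].
\end{align}
```

A large variance of R(t) indicates that the system dwells in, and escapes from, transient states of partial synchrony rather than settling into either full synchronization (R near 1) or full incoherence (R near 0).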
Emmanuelle Tognoli Research Professor, Complex Systems and Brain Sciences, Florida Atlantic University
Dr. Tognoli is a Research Professor in Complex Systems and Brain Sciences at Florida Atlantic University. Her overarching scientific motivation is to understand brain function and dysfunction using the concepts and tools of complexity science. Her main research areas are spatiotemporal brain metastability, the neurophysiological basis of social behavior and the development of complex experimental systems for human-machine and neuro-technological interfaces. Her thinking has been enriched by numerous collaborations with psychiatrists, neurologists, neuropsychologists, ophthalmologists, physicists, mathematicians, behavioral and biological scientists, psychologists and engineers.
Emotion is central to human experience; it influences cognition, behavior, mental health, and well-being. Despite their importance and self-evidence, emotions are notoriously difficult to define scientifically. In this talk, I will present research combining approaches from cognitive neuroscience and machine learning to build models of human brain activity that track the engagement of distinct affective processes. I will discuss evidence suggesting that brain representations of emotional states are high-dimensional, distributed, and inconsistent with intuitive psychological models that organize emotions along a small number of dimensions, such as valence and arousal. Building and validating quantitative models of human brain activity promises to provide insight into the nature of emotion and to provide novel targets for cognitive and therapeutic interventions.
Philip Kragel Assistant Professor of Psychology, Emory University
Dr. Kragel received his Bachelor of Science in Engineering (2006) and a Master's in Engineering Management (2007) at Duke University’s Pratt School of Engineering. He completed his Ph.D. in Psychology and Neuroscience (2015) at Duke University and subsequently trained as a postdoctoral fellow at the University of Colorado Boulder’s Institute of Cognitive Science. He is currently an Assistant Professor in the Psychology Department at Emory University. Dr. Kragel’s research explores the brain and computational basis of cognitive and affective behavior in humans, with a particular focus on understanding the nature of emotions – where they come from and what makes them unique from other mental phenomena. His research combines ideas from cognitive neuroscience and machine learning to build quantitative models that are both sensitive and specific to the engagement of individual mental processes.
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework and has been used successfully in the linear case, e.g., with independent component analysis (ICA) and sparse coding. ICA is well established in the analysis of brain imaging data. However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e., it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using additional information, in particular in the form of temporal structure. Our methods are related to the 'self-supervised' learning increasingly used in deep learning. Application of such methods to EEG/MEG analysis is a promising avenue for research.
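The sketch below illustrates the time-contrastive flavor of this idea: split a nonstationary multivariate time series into segments, train a small network to classify which segment each time point came from, and use the final hidden representation as an estimate of the latent components. This is a simplified, generic illustration of the principle, not the speaker's actual code, and all data and sizes are invented.

```python
# Compact sketch of the time-contrastive learning idea for nonlinear ICA
# (toy data; per TCL theory, components are recovered only up to
# component-wise transformations).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_segments, seg_len, n_channels = 20, 200, 8

# Toy nonstationary sources: variance changes from segment to segment.
scales = torch.rand(n_segments, n_channels) + 0.5
sources = torch.randn(n_segments, seg_len, n_channels) * scales[:, None, :]
mixing = torch.randn(n_channels, n_channels)
x = torch.tanh(sources @ mixing).reshape(-1, n_channels)   # simple nonlinear mixture
labels = torch.arange(n_segments).repeat_interleave(seg_len)

# Feature extractor + segment classifier ("self-supervised" auxiliary task).
feature_net = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(),
                            nn.Linear(64, n_channels))
classifier = nn.Linear(n_channels, n_segments)
opt = torch.optim.Adam(list(feature_net.parameters()) + list(classifier.parameters()),
                       lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    h = feature_net(x)                  # candidate latent components
    loss = loss_fn(classifier(h), labels)
    loss.backward()
    opt.step()

estimated_components = feature_net(x).detach()
print(estimated_components.shape)
```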
Aapo Hyvärinen Professor of Computer Science (Machine Learning), University of Helsinki
Aapo Hyvarinen studied undergraduate mathematics at the universities of Helsinki (Finland), Vienna (Austria), and Paris (France), and obtained a Ph.D. degree in Information Science at the Helsinki University of Technology in 1997. After post-doctoral work at the Helsinki University of Technology, he moved to the University of Helsinki in 2003, where he was appointed Professor in 2008, at the Department of Computer Science. From 2016 to 2019, he was Professor at the Gatsby Computational Neuroscience Unit, University College London, UK. Aapo Hyvarinen is the main author of the books 'Independent Component Analysis' (2001) and 'Natural Image Statistics' (2009), Action Editor at the Journal of Machine Learning Research and Neural Computation, and has worked as Area Chair at ICML, ICLR, AISTATS, UAI, ACML and NeurIPS. His current work concentrates on unsupervised machine learning and its applications to neuroscience.
Learning to read changes the mind and brain. How might bilingual experience influence children’s neural architecture for learning to read? Words have sounds and meanings, and the neural architecture for learning to read includes the formation of sound-to-print and meaning-to-print neurocognitive pathways. Importantly, there is also significant cross-linguistic variation in how children form these associations. In phonologically transparent languages such as Italian, children develop stronger sound-to-print networks, whereas learners of Chinese form stronger meaning-to-print associations. To understand how bilingual experiences influence children’s developing neural architecture for learning to read, we use fNIRS with Spanish-English and Chinese-English bilingual children in the US. Several key findings emerge from these data that we will discuss during the presentation. First, the findings reveal principled bilingual transfer effects on children’s phonological, morphological, and lexical processes for learning to read. Second, the findings reveal universal aspects of literacy development across these typologically distinct languages. The findings are discussed in light of theoretical perspectives on learning to read and bilingual development, as well as the universal and language-specific aspects of language, literacy, and dyslexia.
Ioulia Kovelman Associate Professor of Psychology, University of Michigan
Dr. Ioulia Kovelman is an Associate Professor of Psychology at the University of Michigan. Bilingualism changes the mind and brain: Dr. Kovelman is a developmental cognitive neuroscientist who uses optical fNIRS neuroimaging to understand the effects of bilingualism on children’s language, literacy, and brain development. Dr. Kovelman studies both typically developing children and those with language and reading difficulties. Dr. Kovelman holds a PhD from Dartmouth College and completed post-doctoral training at the Massachusetts Institute of Technology. She is a recipient of NIH and NSF funding awards, among others. Dr. Kovelman welcomes cross-linguistic and cross-cultural collaborators, and students interested in the bilingual brain. To learn more, please visit Dr. Kovelman’s website at the University of Michigan.
To map the neural substrate of mental function, cognitive neuroimaging relies on controlled psychological manipulations that engage brain systems associated with specific cognitive processes. In order to build comprehensive atlases of cognitive function in the brain, the field must assemble maps for many different cognitive processes, which often evoke overlapping patterns of activation. Such data aggregation faces contrasting goals: on the one hand, finding correspondences across vastly different cognitive experiments; on the other hand, precisely describing the function of any given brain region.
In this talk I will present two analysis frameworks that tackle these difficulties and thereby enable the generation of brain atlases for cognitive function. The first one uses deep-learning techniques to extract representations—task-optimized networks—that form a set of basis cognitive dimensions relevant to the psychological manipulations. This approach does not assume any prior knowledge of the commonalities shared by the studies in the corpus; those are inferred during model training.
The second one leverages ontologies of cognitive concepts and multi-label brain decoding to map the neural substrate of these concepts. Crucially, it can accurately decode the cognitive concepts recruited in new tasks. These results demonstrate that aggregating independent task-fMRI studies can provide a more precise global atlas of selective associations between brain and cognition.
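To make the multi-label decoding idea concrete, here is a toy scikit-learn sketch in which each activation map can be tagged with several cognitive concepts at once and one binary decoder is fit per concept. The data, labels, and sizes are invented, and this is a generic illustration rather than the specific framework presented in the talk.

```python
# Toy multi-label decoding of cognitive concepts from vectorized activation maps
# (random stand-in data; not the talk's actual pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_maps, n_voxels, n_concepts = 300, 500, 4

X = rng.standard_normal((n_maps, n_voxels))          # vectorized activation maps
Y = rng.integers(0, 2, size=(n_maps, n_concepts))    # multi-hot concept labels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# One binary decoder per concept; a new task's map may activate several concepts.
decoder = OneVsRestClassifier(LogisticRegression(max_iter=1000))
decoder.fit(X_tr, Y_tr)

Y_pred = decoder.predict(X_te)
# Element-wise label accuracy on held-out maps (chance level here, since the
# toy data carry no real signal).
print((Y_pred == Y_te).mean())
```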
Bertrand Thirion Researcher, Inria, Parietal Team
Bertrand Thirion is the leader of the Parietal team, part of the Inria research institute in Saclay, France, which develops statistics and machine learning techniques for brain imaging. He contributes both algorithms and software, with a special focus on functional neuroimaging applications. He is involved in the NeuroSpin (CEA) neuroimaging center, one of the leading high-field MRI centers for brain imaging. Bertrand Thirion is also head of the DATAIA Institute, which federates research on AI, data science, and their societal impact at Paris-Saclay University. He has recently been appointed as a member of the expert committee in charge of advising the French government during the Covid-19 pandemic.
Vestibular symptoms are among the most frequent complaints after concussion and are associated with prolonged recovery. However, the structural and functional alterations underlying post-concussion vestibular dysfunction are not well understood. Furthermore, what constitutes the vestibular network in humans has not been fully elucidated. In this talk, I will review the clinical findings of vestibular impairment following concussion and the current models of central vestibular processing. I will then discuss the data supporting central multisensory processing as the primary driver of subacute and chronic post-concussion vestibular symptoms. Defining the alterations in vestibular processing after head injury will allow better prognostication and more targeted, patient-centric neurorehabilitation therapies.
Jason Allen Associate Professor, Emory University
Dr. Allen received his BS in Cellular and Molecular Biology from Tulane University and his MD and PhD in Neuroscience from Georgetown University School of Medicine. His thesis focused on the modulation of neuronal injury by metabotropic glutamate receptors using cell culture and animal models of trauma. He then completed neurology and diagnostic radiology residencies as well as a 2-year neuroradiology fellowship at New York University. Dr. Allen is currently an Associate Professor of Radiology and Imaging Sciences and Neurology at Emory University and an Associate Professor of Biomedical Engineering at Georgia Institute of Technology. He is the Director of the Neuroradiology Division and the Medical Director of the Center for Systems Imaging at Emory University. His laboratory currently focuses on defining the changes in structural and functional brain connectivity after concussion, particularly in patients with vestibular impairment, and developing diagnostic and prognostic imaging markers for this disorder. In addition, he is interested in developing and refining neurorehabilitation techniques for post-concussion vestibular impairment as well as understanding neural plasticity related to successful therapy.
Functional differences in the default mode network and related memory systems are observed in typical aging. Our research, for example, shows that functional connectivity within the posterior memory system in particular appears to be affected by aging. Furthermore, we found that negative subsequent memory effects may differentially support memory performance across the lifespan, suggesting a developmental maturation and age-related decline.
The posterior default mode regions also appear most vulnerable to early Alzheimer’s disease-related brain changes. Subjective cognitive decline, the perceived decline in cognitive abilities in the absence of deficits on clinical assessments, is a known risk factor for Alzheimer’s disease. Our work shows that participants with subjective cognitive decline have lower default mode network functional connectivity than older adults without it, particularly in posterior regions, suggesting possible early functional brain changes in older adults with subjective cognitive decline in brain regions similar to those affected in early Alzheimer’s disease. However, as these findings are based on cross-sectional data, they do not provide insight into actual brain changes in those with subjective cognitive decline. One of our current lines of research aims to address this gap in the literature by examining changes in functional connectivity over a three-year period. In line with our cross-sectional findings, our longitudinal results reveal a steeper decline in default mode network functional connectivity in older adults with subjective cognitive decline than in those without.
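A hedged sketch of how such a longitudinal group difference in connectivity could be tested is given below, using a linear mixed-effects model with a random intercept per participant. The file path and column names are hypothetical; this is a generic illustration of the design, not the lab's actual analysis code.

```python
# Hedged sketch: test whether DMN connectivity declines faster in the
# subjective-cognitive-decline group (hypothetical file and column names).
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant per visit.
df = pd.read_csv("dmn_connectivity_long.csv")   # columns: subject, years, scd_group, dmn_fc

# Random intercept per participant; the years:scd_group interaction tests for a
# steeper slope of decline in the subjective-cognitive-decline group.
model = smf.mixedlm("dmn_fc ~ years * scd_group", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```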
Jessica Damoiseaux Associate Professor, Institute of Gerontology and Department of Psychology, Wayne State University
Dr. Jessica Damoiseaux is an Associate Professor in the Institute of Gerontology and Department of Psychology at Wayne State University. Dr. Damoiseaux received her MSc in Psychology from Utrecht University and PhD in Cognitive Neuroscience from VU University Amsterdam. She then went on to do a postdoctoral fellowship at Stanford University. She currently heads the Connect Lab. Her research investigates the application of MRI-derived brain measures, with an emphasis on brain network approaches, to study typical aging and early detection of neurodegenerative disease.
Even in the absence of external stimuli, neural activity is both highly dynamic and organized across multiple spatiotemporal scales. The continuous evolution of brain activity patterns during rest is believed to help maintain a rich repertoire of possible functional configurations that relate to typical and atypical cognitive phenomena. Whether these transitions or “explorations” follow some underlying arrangement or instead lack a predictable ordered plan remains to be determined. Here, using a precision dynamics approach, we aimed to reveal the rules that govern transitions in brain activity at rest at the single-participant level. We hypothesized that by revealing and characterizing the overall landscape of whole-brain configurations (or states), we could interpret the rules (if any) that govern transitions in brain activity at rest. To generate the landscape of whole-brain configurations, we used a Topological Data Analysis-based Mapper approach. Across all participants, we consistently observed a rich topographic landscape in which the transition of activity from one state to the next involved a central hub-like “transition state.” The hub topography was characterized as a shared attractor-like basin in which all canonical resting-state networks were represented equally. The surrounding periphery of the landscape had distinct network configurations. The intermediate transition state, and traversal through it via a topographic gradient, seemed to provide the underlying structure for the continuous evolution of brain activity patterns at rest. In addition, the landscape architecture was more consistent within than between subjects, providing evidence of idiosyncratic dynamics and potential utility in precision medicine.
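For readers unfamiliar with Mapper, the following is a generic illustration of the idea on toy time-by-region data, assuming the open-source KeplerMapper (kmapper) package: project time points through a low-dimensional lens, cover the lens with overlapping bins, cluster within each bin, and link clusters that share time points to obtain a graph of brain states. The sizes and parameters are arbitrary, and this is not the precision-dynamics pipeline from the talk.

```python
# Generic Mapper sketch on toy fMRI-like data, assuming the kmapper package.
import numpy as np
import kmapper as km
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 100))   # 600 time points x 100 parcels (toy data)

mapper = km.KeplerMapper(verbose=0)

# Lens: a low-dimensional projection of each time point (here, 2 PCA components).
lens = mapper.fit_transform(X, projection=PCA(n_components=2))

# Overlapping cover of the lens + clustering within each bin -> a state graph.
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=10, perc_overlap=0.5),
                   clusterer=DBSCAN(eps=15.0, min_samples=3))

mapper.visualize(graph, path_html="mapper_landscape.html")
```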
Manish Saggar Assistant Professor, Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine
Manish is a computational neuroscientist trained in machine learning, neuroscience, and psychiatry. The overarching goal of his research is to develop reliable computational methods that allow for characterizing and modeling the temporal dynamics of brain activity without averaging data in either space or time at the outset. He firmly believes that the spatiotemporal richness of brain activity might hold the key to finding person- and disorder-centric biomarkers. Manish received his bachelor’s degree from the Indian Institute of Information Technology (Allahabad) and his Master's and PhD from the University of Texas at Austin, advised by Drs. Risto Miikkulainen and Clifford Saron. He did his Postdoctoral Fellowship at the Stanford University School of Medicine, mentored by Dr. Allan Reiss. He is currently an assistant professor in the Department of Psychiatry & Behavioral Sciences at Stanford University School of Medicine and directs the Brain Dynamics Lab.
Cognitive processing and goal-directed behavior are hypothesized to be the result of frequency-specific interactions between specialized but widely distributed cortical regions. Cross-frequency coupling (CFC) has been proposed to coordinate the neural dynamics between these distinct frequency bands across spatial and temporal scales. One particular form of CFC, known as phase-amplitude coupling (PAC), quantifies the interplay between the phase of a slower oscillation and the envelope of a faster oscillation. Existing methods for assessing PAC have limitations, including limited frequency resolution and sensitivity to noise, data length, and sampling rate, due to their inherent dependence on bandpass filtering. Moreover, most current applications of PAC to neuronal data focus on average coupling within a single channel. In this talk, I will present some recent developments in the computation of PAC. First, I will introduce a new time-frequency based phase-amplitude coupling measure that addresses the biases encountered by Hilbert transform based PAC measures. Next, I will introduce an extension of PAC analysis from the bivariate to the multivariate case, allowing us to look at whole-brain cross-frequency coupling networks. Finally, I will present applications of these new PAC computation tools to EEG data collected during error monitoring.
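For orientation, the sketch below computes the standard Hilbert-transform-based PAC estimate (a Tort-style modulation index) that newer time-frequency measures are typically benchmarked against. The signal and filter bands are toy choices, and this is a generic baseline illustration, not the speaker's new method.

```python
# Generic Hilbert-based PAC sketch (Tort-style modulation index) on a toy signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)

# Toy signal: 40 Hz amplitude modulated by the phase of a 6 Hz rhythm, plus noise.
slow = np.sin(2 * np.pi * 6 * t)
fast = (1 + 0.8 * slow) * np.sin(2 * np.pi * 40 * t)
sig = slow + fast + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 8)))    # phase of the slow oscillation
amp = np.abs(hilbert(bandpass(sig, 30, 50)))      # envelope of the fast oscillation

# Modulation index: deviation of the phase-binned amplitude distribution from uniform.
n_bins = 18
bins = np.linspace(-np.pi, np.pi, n_bins + 1)
mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                     for i in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index: {mi:.3f}")
```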
Selin Aviyente Professor, Department of Electrical and Computer Engineering, Michigan State University
Selin Aviyente received her B.S. degree with high honors in Electrical and Electronics Engineering from Bogazici University, Istanbul, in 1997. She received her M.S. and Ph.D. degrees, both in Electrical Engineering: Systems, from the University of Michigan, Ann Arbor, in 1999 and 2002, respectively. She joined the Department of Electrical and Computer Engineering at Michigan State University in 2002, where she is currently a Professor and Associate Chair for Undergraduate Studies. Her research focuses on statistical and nonstationary signal processing, higher-order data representations, and network science with applications to neurophysiological signals. She has authored more than 150 peer-reviewed journal and conference papers. She is the recipient of a 2005 Withrow Teaching Excellence Award, a 2008 NSF CAREER Award, and a 2021 Withrow Excellence in Diversity Award. She is currently serving on several technical committees of the IEEE Signal Processing Society and is the vice-chair of the IEEE Bioimaging and Signal Processing (BISP) technical committee. She is an Associate Editor for IEEE Open Journal of Signal Processing and Digital Signal Processing.
As neuroscientists, we want to understand how causal interactions or mechanisms within the brain give rise to perception, cognition, and behavior. It is typical to estimate interaction effects from measured activity using statistical techniques such as functional connectivity, Granger causality, or information flow, whose outcomes are often falsely treated as revealing mechanistic insight. Since these statistical techniques fit models to low-dimensional measurements from brains, they ignore the fact that brain activity is high-dimensional. Here we focus on the obvious confound of common inputs: the countless unobserved variables likely have more influence than the few observed ones. Any given observed correlation can be explained by an infinite set of causal models that take into account the unobserved variables. Therefore, correlations within massively undersampled measurements tell us little about mechanisms. We argue that these mis-inferences of causality from correlation are compounded by an implicit redefinition of words that suggest mechanisms, such as connectivity, causality, and flow.
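A small simulation makes the common-input confound concrete: two signals that never influence each other appear strongly correlated, and even asymmetrically lagged, simply because both are driven by an unobserved third signal. The code below is an illustration of this point only, not an analysis from the talk.

```python
# Common-input confound: two non-interacting "regions" driven by the same
# unobserved source look strongly (and directionally) coupled.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = np.cumsum(rng.standard_normal(n + 4))   # unobserved common driver (slow drift)

x = z[4:] + rng.standard_normal(n)          # region 1: sees the driver earlier
y = z[:n] + rng.standard_normal(n)          # region 2: sees the driver 4 samples later

# x and y share no direct connection, yet their correlation is high, and the lag
# asymmetry would even mimic a "directed" influence from x to y.
print(np.corrcoef(x, y)[0, 1])
```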
Konrad Kording Penn Integrated Knowledge Professor, University of Pennsylvania
Dr. Kording obtained both a diploma degree and a PhD in physics at ETH Zurich in 1997 and 2001, respectively. He then worked as a postdoctoral fellow at the Collegium Helveticum in Zurich and at University College London, followed by a Heisenberg Fellow position at MIT. He joined the faculty at Northwestern University and the Rehabilitation Institute of Chicago where he was a professor of physical medicine and rehabilitation, physiology, and applied mathematics. In 2017, he joined the faculty at the University of Pennsylvania with joint appointments in the Department of Neuroscience and Department of Bioengineering.
Turbulence is a special dynamical state that drives many physical systems by way of its ability to facilitate fast energy/information transfer across scales. These qualities are important for brain function, but it is currently unknown whether the brain also exhibits turbulence as a fundamental organisational principle. Using large-scale neuroimaging empirical data from 1003 healthy participants, we demonstrate amplitude turbulence in human brain dynamics. Furthermore, we build a whole-brain model with coupled oscillators to demonstrate that the best fit of our model to the data corresponds to a region of maximally developed amplitude turbulence, which also corresponds to maximal sensitivity to the processing of external stimulation (information capability). The model captures the economy of anatomy by following the Exponential Distance Rule of anatomical connections as a cost-of-wiring principle. This establishes a firm link between turbulence and optimal brain function. Overall, our results reveal a novel way of analysing and modelling whole-brain dynamics that, for the first time, establishes turbulence as a fundamental principle of brain organisation.
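The sketch below illustrates, with invented sizes and a plain phase-oscillator stand-in rather than the whole-brain model from the talk, two of the ingredients mentioned above: an Exponential Distance Rule coupling matrix, and amplitude turbulence quantified as the spatiotemporal variability of a local order parameter.

```python
# Simplified sketch: EDR coupling + local-order-parameter variability as a
# turbulence-like measure (toy phase oscillators, not the talk's model).
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps, lam = 100, 0.01, 2000, 0.18

# (1) EDR coupling: connection strength decays exponentially with distance.
pos = rng.random((n, 3)) * 40.0                       # toy 3D region coordinates (mm)
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
C = np.exp(-lam * dist)
np.fill_diagonal(C, 0)

# Kuramoto-style dynamics with heterogeneous natural frequencies.
omega = 2 * np.pi * (0.05 + 0.02 * rng.standard_normal(n))
theta = 2 * np.pi * rng.random(n)
G = 0.5
R_local = np.empty((steps, n))

for t in range(steps):
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + G * coupling)
    # (2) Local order parameter: coherence of each region with its EDR neighborhood.
    z = (C * np.exp(1j * theta)[None, :]).sum(axis=1) / C.sum(axis=1)
    R_local[t] = np.abs(z)

amplitude_turbulence = R_local.std()   # variability across space and time
print(amplitude_turbulence)
```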
Gustavo Deco Professor, Institució Catalana de Recerca i Estudis Avançats (ICREA) and Pompeu Fabra University (UPF)
Gustavo Deco is Research Professor at the Institució Catalana de Recerca i Estudis Avançats (ICREA) and Professor (Catedrático) at the Pompeu Fabra University (UPF), where he leads the Computational Neuroscience group. He is also Director of the Center of Brain and Cognition (UPF). In 1987 he received his PhD in Physics for his thesis on relativistic atomic collisions. In 1987 he was a postdoc at the University of Bordeaux in France. From 1988 to 1990 he held a postdoctoral fellowship of the Alexander von Humboldt Foundation at the University of Giessen in Germany. From 1990 to 2003 he led the Computational Neuroscience Group at the Siemens Corporate Research Center in Munich, Germany. In 1997 he obtained his Habilitation (the highest academic degree in Germany) in Computer Science (Dr. rer. nat. habil.) at the Technical University of Munich for his thesis on neural learning. In 2001, he received his PhD in Psychology at the Ludwig-Maximilians-University of Munich.
Significant concerns have been raised regarding the reproducibility of current scientific practices. I will provide an overview of the problems with reproducibility (focusing in particular on neuroimaging research) and outline a set of tools that provide the ability to build fully reproducible analysis workflows. I will also discuss the need for transparency through data sharing, and the importance of community standards for data organization to make data sharing effective.
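One widely used community standard for data organization in neuroimaging is BIDS (the Brain Imaging Data Structure). The sketch below simply creates a minimal, illustrative BIDS-style skeleton for a single subject to show the naming conventions; the filenames and metadata are examples rather than a complete specification, and this is not code from the talk.

```python
# Create a minimal, illustrative BIDS-style dataset skeleton (example filenames only).
import json
from pathlib import Path

root = Path("my_bids_dataset")
(root / "sub-01" / "anat").mkdir(parents=True, exist_ok=True)
(root / "sub-01" / "func").mkdir(parents=True, exist_ok=True)

# Required dataset-level metadata.
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "Example dataset", "BIDSVersion": "1.8.0"}, indent=2))
(root / "participants.tsv").write_text("participant_id\tage\nsub-01\t25\n")

# Key-value filenames encode subject, task, and modality.
(root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()
(root / "sub-01" / "func" / "sub-01_task-rest_bold.nii.gz").touch()
(root / "sub-01" / "func" / "sub-01_task-rest_bold.json").write_text(
    json.dumps({"RepetitionTime": 2.0, "TaskName": "rest"}, indent=2))

print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*")))
```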
Russ Poldrack Professor, Departments of Psychology and Computer Science; Director, Stanford Center for Reproducible Neuroscience, Stanford University
Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the Openneuro.org and Neurovault.org data sharing projects and the Cognitive Atlas ontology.
The recent explosion of neuroimaging studies in large-scale populations of humans has begun to reveal complex mappings between brain and behavior. Cross-sectional studies do not, however, allow for exploring causality in the brain. Lesion studies have historically allowed direct linking between an area of damage and a specific behavioral change, and more recent virtual lesion (TMS), ECoG, and pharmacological experiments have allowed exploration of the effect of manipulating brain function on behavior. These types of studies take steps in the direction of understanding causality in brain-behavior relationships. In her talk, Amy will discuss recent work in this area and present some of her own work studying the effect of damage on both brain structure/function and behavior, how the brain recovers from damage, and how psychedelics may impact whole-brain function.
Amy Kuceyeski Associate Professor of Mathematics; Adjunct Associate Professor of Computational Biology, Cornell University
Amy Kuceyeski is an Associate Professor of Mathematics in the Radiology Department at Weill Cornell Medicine and an Adjunct Associate Professor in the Computational Biology Department at Cornell University. She was awarded her PhD in 2009 from Case Western Reserve University and spent her postdoctoral fellowship and early faculty years at Weill Cornell Medicine. For over a decade, Amy has been interested in understanding how the human brain works in order to better diagnose, prognose and treat neurological disease and injury. Quantitative approaches, including mathematical modeling and machine learning, applied to data from rapidly evolving neuroimaging techniques, have the potential to enable ground-breaking discoveries about how the brain works. Amy has particular interest in lesion-symptom mapping, non-invasive brain stimulation and pharmacological interventions, like psychedelics, that may be used to modulate brain activity and promote recovery from disease or injury.
Research on the emotional and motivational brain often employs relatively static paradigms, such as the presentation of emotion-laden faces. Because natural behaviors evolve temporally, advancing understanding of dynamic processes holds promise for opening new research avenues. In this presentation, I will discuss recent attempts to develop paradigms to study the dynamics of threat- and reward-related processes, as well as their interactions. I will also describe work exploring how recurrent neural networks can be used to characterize spatio-temporal dynamics as measured by functional MRI, and how they can be used to study more naturalistic/dynamic paradigms.
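As a concrete, hedged illustration of the last point, the sketch below trains a small recurrent network (a GRU) on parcellated fMRI-like time series to label each timepoint; the architecture, sizes, and task are placeholders rather than the speaker's actual models.

```python
# Sketch: a recurrent network over parcellated fMRI time series, in the spirit of
# using RNNs to characterize spatio-temporal dynamics. Everything here is synthetic.
import torch
import torch.nn as nn

class FMRIGru(nn.Module):
    def __init__(self, n_rois=100, hidden=64, n_states=2):
        super().__init__()
        self.rnn = nn.GRU(n_rois, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_states)

    def forward(self, x):                 # x: (batch, time, n_rois)
        h, _ = self.rnn(x)                # hidden state at every TR
        return self.readout(h)            # per-timepoint state logits

model = FMRIGru()
x = torch.randn(8, 200, 100)              # 8 runs, 200 TRs, 100 ROIs (synthetic)
y = torch.randint(0, 2, (8, 200))         # synthetic per-TR labels (e.g., two task states)
logits = model(x)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2), y.reshape(-1))
loss.backward()                           # one illustrative training step
print(float(loss))
```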
Luiz Pessoa Director, Maryland Neuroimaging Center; Professor, Department of Psychology, University of Maryland
Luiz Pessoa obtained a PhD in computational neuroscience at Boston University. After his PhD he returned to his home country, Brazil, and joined the faculty of Computer Systems Engineering at the Federal University of Rio de Janeiro. After a few years, he returned to the US as a Visiting Fellow at the National Institute of Mental Health. He then joined the Department of Psychology at Brown University as an Assistant Professor, the Department of Psychological and Brain Sciences at Indiana University, Bloomington, as an Associate Professor, and since 2011 has been at the Department of Psychology, University of Maryland, College Park, where he is a full Professor and Director of the Maryland Neuroimaging Center. His research interests center on the interactions between emotion/motivation and perception/cognition. In 2013 he published the book 'The cognitive-emotional brain: from interactions to integration', and his book 'The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together', also with MIT Press, is scheduled to be published later this year.
The UK Biobank is a longitudinal population neuroimaging dataset with extensive neuroimaging, genomic, and phenotypic measures for up to N=100,000 older-age participants. The statistical power, deep phenotyping, and lack of exclusion criteria in this cohort provide an important opportunity to investigate ‘real-world’ mental health. I will present several studies performed in our lab over the past three years to validate UK Biobank mental health measures, identify reproducible multivariate neural correlates of mental health, parse clinical and biological heterogeneity in mental health, and identify lifestyle factors that promote mental health resilience.
Janine Bijsterbosch Assistant Professor, Computational Imaging Research Center, Department of Radiology, Washington University in St. Louis
Dr. Bijsterbosch is an Assistant Professor in the Computational Imaging Research Center of the Department of Radiology at Washington University in St Louis. The Personomics Lab headed by Dr. Bijsterbosch aims to understand how brain connectivity patterns differ from one person to the next, by studying the “personalized connectome”. Using population datasets such as the UK Biobank, the Personomics Lab adopts cutting edge analysis techniques to study multivariate imaging measures associated with mental health symptomatology, heterogeneity, and resilience. In addition, Dr. Bijsterbosch is an advocate for Open Science serving as Editor of Open Data Replication Reports for the NeuroImage family of journals, and as Chair of the Open Science Special Interest Group within the Organization for Human Brain Mapping. Dr. Bijsterbosch is also engaged with international teaching efforts as Co-Chair of the FSL Course Organizing Committee and lead author of a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017.
Armin Raznahan
As the field of music cognition rapidly burgeons, researchers are beginning to consider how the unique amalgam of scientific and humanistic study of music may translate into large-scale interventions that improve cognition for many, including but not limited to people from neurodiverse populations. In this talk I will examine novel ways in which music cognition research may help improve cognition, moving away from overused tropes (e.g. the Mozart Effect) towards future directions of use-inspired music cognition research. As use cases, I will describe some recent studies in my lab that capitalize on new musical technology, developed from first principles of music cognition research, to help those with attention deficits, memory disorders, and Parkinson's Disease. Our results show how music cognition can help refine and target music-based interventions for multiple special populations, by pinpointing ways in which music capitalizes on fundamental operating characteristics of the brain.
Psyche Loui Associate Professor of Creativity and Creative Practice in the Department of Music, Northeastern University
Psyche Loui is Associate Professor of Creativity and Creative Practice in the Department of Music at Northeastern University. She graduated from the University of California, Berkeley with her PhD in Psychology, and attended Duke University as an undergraduate with degrees in Psychology and Music. In the MIND (Music, Imaging, and Neural Dynamics) lab, Dr. Loui studies the neuroscience of music perception and cognition, tackling questions such as: What gives people the chills when they are moved by a piece of music? How does connectivity in the brain enable or disrupt music perception? Can music be used to help those with neurological and psychiatric disorders? Dr. Loui’s work has received multiple grants from the Grammy Foundation, a young investigator award from the Positive Neuroscience Institute, and a CAREER award from the National Science Foundation, and has been featured by the Associated Press, New York Times, Boston Globe, BBC, CNN, The Scientist magazine, and other news outlets.
As advances in technology allow the acquisition of complementary information, it is common for scientific studies to collect multiple datasets. In this talk, we will examine two approaches to data integration. In the first part, we will analyze the joint and individual structure in cognitive assessments and brain morphometry from the Alzheimer’s Disease Neuroimaging Initiative. We introduce probabilistic joint and individual variation explained (ProJIVE), which extends probabilistic PCA to multiple datasets. ProJIVE reveals links between brain regions and cognitive performance. In the second part of this talk, we introduce Simultaneous Non-Gaussian component analysis (SING), in which dimension reduction and feature extraction are achieved simultaneously. Instead of maximizing variance as in PCA, SING maximizes non-Gaussianity to extract a lower-dimensional subspace and reveal new insights. We apply our method to a working memory task and resting-state correlations from the Human Connectome Project. We find joint structure, as evidenced by learned spatial correspondence. Moreover, some of the subject scores are related to fluid intelligence.
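For readers unfamiliar with the contrast between variance-based and non-Gaussianity-based dimension reduction, the toy example below uses FastICA as a stand-in for the non-Gaussianity-maximization step; it is not the authors' SING or ProJIVE code, and the data are synthetic.

```python
# Toy contrast between variance-driven PCA and non-Gaussianity-driven extraction
# (the idea behind SING). FastICA stands in for the non-Gaussianity step.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
n, p, k = 2000, 60, 6

# One heavy-tailed (Laplace) source hidden among five Gaussian factors of
# comparable variance, plus a little sensor noise.
sources = np.column_stack([rng.laplace(size=n) / np.sqrt(2),
                           rng.standard_normal((n, 5))])
mixing = rng.standard_normal((k, p))
X = sources @ mixing + 0.1 * rng.standard_normal((n, p))

def max_excess_kurtosis(Z):
    Z = (Z - Z.mean(0)) / Z.std(0)
    return float(np.max((Z ** 4).mean(0) - 3.0))

pcs = PCA(n_components=k).fit_transform(X)
ics = FastICA(n_components=k, random_state=0, max_iter=1000).fit_transform(X)

# Variance-ranked PCA components mix the heavy-tailed source with the Gaussian
# factors, while maximizing non-Gaussianity tends to isolate it.
print("max excess kurtosis, PCA components:    ", round(max_excess_kurtosis(pcs), 2))
print("max excess kurtosis, FastICA components:", round(max_excess_kurtosis(ics), 2))
```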
Benjamin Risk Assistant Professor, Dept. of Biostatistics & Bioinformatics, Emory University
Benjamin Risk is an Assistant Professor in the Department of Biostatistics & Bioinformatics, Rollins School of Public Health, Emory University. He completed his PhD in Statistics at Cornell University (2015) and was a postdoctoral associate with the Statistical and Applied Mathematical Sciences Institute and the University of North Carolina, Chapel Hill (2017). Benjamin Risk’s research focuses on neuroimaging and aims to further scientific understanding and medical research by developing, improving, and disseminating statistical methodology. His research includes dimension reduction methods, multimodal data integration, and the statistical impacts of MRI acquisition methods.
Maximiliana Rifkin
Richard Betzel Assistant Professor, Psychological and Brain Sciences, Indiana University Bloomington
Undergraduate in Physics at Oberlin College, Ohio. PhD at Indiana University in psychological and brain sciences/cognitive science with Olaf Sporns. Postdoc at the University of Pennsylvania in Bioengineering with Danielle Bassett. Started the “brain networks and behavior lab” at Indiana University in 2018. Our aim is to characterize the architecture of macro-scale brain networks and understand its roles in cognition/disease/development.
Comparison and integration of neuroimaging data from different brains and populations is fundamental in neuroscience in that it underlies countless statistically meaningful conclusions about the human brain. However, the uniqueness of each human brain imposes fundamental challenges to existing approaches that aim to compare and integrate brain science data across individuals and populations. This longstanding challenge has escalated and become more urgent with the recent dramatic growth of publicly available brain science data, particularly neuroimaging data. Despite numerous efforts in the brain science field over the past few decades, there is still a fundamental lack of basic understanding and concrete representation of the regularity and variability of the human brain. In this talk, I will share the past decade of research in the Cortical Architecture Imaging and Discovery Lab at the University of Georgia on representing human brain commonality and individuality using neuroimaging data. I will introduce the opportunities and challenges in creating a universal and individualized brain reference system that encodes functional localizations of brain structures by fiber connection patterns and topographic folding patterns, which possess finer granularity, better functional homogeneity, more accurate functional localization, and intrinsically established correspondence across different brains.
Tianming Liu Professor, Computer Science, University of Georgia
Dr. Tianming Liu is a Distinguished Research Professor (since 2017) and a Full Professor of Computer Science (since 2015) at University of Georgia (UGA). Dr. Liu is also an affiliated faculty (by courtesy) with UGA Bioimaging Research Center (BIRC), UGA Institute of Bioinformatics (IOB), UGA Neuroscience PhD Program, and UGA Institute of Artificial Intelligence (IAI). Dr. Liu’s primary research interests are brain imaging, computational neuroscience, and brain-inspired artificial intelligence, and he has published over 380 papers in this area. Dr. Liu is the recipient of the NIH Career Award (2007-2012) and the NSF CAREER Award (2012-2017). Dr. Liu is a Fellow of AIMBE (inducted in 2018) and was the General Chair of MICCAI 2019.
Spontaneous fMRI signals provide a valuable window into human brain functional organization. Recent work has also demonstrated the potential for extracting information about dynamic internal states from fMRI. In this talk, I will discuss studies in which we use multimodal functional imaging to investigate the dynamics of spontaneous brain activity and to track ongoing changes in alertness and autonomic physiology.
Catie Chang Assistant Professor of Electrical and Computer Engineering, Computer Science, and Biomedical Engineering, Vanderbilt University
Catie Chang is an Assistant Professor of Electrical and Computer Engineering, Computer Science, and Biomedical Engineering at Vanderbilt University. She received her Ph.D. from Stanford University, and was a postdoctoral fellow in the NIH Intramural Research Program. Her lab, the Neuroimaging and Brain Dynamics Lab, seeks to advance understanding of human brain function through techniques for analyzing and interpreting neuroimaging data.
Conventional neuroradiological diagnosis of epilepsy still relies on qualitative inspection of clinical MRI. Epilepsy is associated with subtle quantitative abnormalities that have not been leveraged to improve diagnostic accuracy. In this talk, we will discuss applications of deep learning to address this scientific and clinical gap.
Leonardo Bonilha Professor, Department of Neurology, Emory University
I am a Professor of Neurology and clinician scientist at Emory University. I am a clinical neurophysiologist and epileptologist, and I also have graduate and post-graduate degrees in computational neurosciences, clinical research and neuroimaging. Overall, my research is focused on improving the understanding of the mechanisms that underlie neurological impairments, epilepsy and language processing. I am directly involved in mechanistic research projects related to epilepsy or aphasia. My research is also related to clinical trials for aphasia treatments. In epilepsy research, I am the corresponding MPI on an R01 project to assess connectome markers of epilepsy surgical outcomes. Related to language and aphasia, I am the PI for an NIDCD-supported R01 project on biomarkers of aphasia recovery using the brain connectome. I am the corresponding MPI for the phase II clinical trial of speech entrainment for aphasia recovery (SpARc), and the PI of a core project related to brain health and aphasia recovery, as part of the ongoing P50 Center for the Study of Aphasia Recovery (C-STAR, PI Fridriksson).
Artificial neural networks can successfully play video games, yet these AI agents have difficulty adapting to changes in the game environment or transferring knowledge across different games. As human players can efficiently transfer skills across environments, the Courtois NeuroMod team is working to align the representations of artificial neural networks with those of human players [1]. We first designed and validated a fully MRI-compatible video game controller [2]. The data collected for this project are part of an extremely deep individual fMRI sample currently featuring up to 140 hours of fMRI per subject (N=6), made available to the community as part of the Courtois NeuroMod data bank (https://cneuromod.ca). We successfully trained artificial agents to imitate the actions of humans playing the game “Shinobi III: Return of the Ninja Master” and found that the internal representations of the agents could be used to effectively predict individual brain activity measured with functional magnetic resonance imaging [3]. This work could open new avenues to train robust AI video game characters and to gain new insights into brain representations for active and complex stimuli.
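A minimal sketch of the encoding analysis described above might look like the following: ridge regression mapping an agent's internal activations to ROI-level brain signals, scored by held-out correlation. The data here are synthetic, and the actual NeuroMod pipeline (game frames, agent training, HRF handling, preprocessing) is considerably more involved.

```python
# Sketch of a brain-encoding analysis: predict ROI fMRI activity from the
# internal activations of an artificial agent. Synthetic data only.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_tr, n_units, n_rois = 1200, 256, 400           # TRs, agent units, brain parcels

acts = rng.standard_normal((n_tr, n_units))      # stand-in for agent activations
W = rng.standard_normal((n_units, n_rois)) * 0.1
bold = acts @ W + rng.standard_normal((n_tr, n_rois))    # synthetic "BOLD"

X_tr, X_te, y_tr, y_te = train_test_split(acts, bold, test_size=0.25, shuffle=False)
enc = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
pred = enc.predict(X_te)

# Encoding accuracy: Pearson correlation between predicted and measured
# time series, one value per ROI.
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_rois)]
print("median ROI prediction r =", round(float(np.median(r)), 3))
```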
Pierre Bellec Associate Professor, Department of Psychology, University of Montreal
Pierre Bellec is an associate professor at the Department of Psychology of the University of Montreal. His main research interest is to train artificial neural networks to mimic human brain activity and behavior at the level of individuals.
While neuroimaging has transformed neuroscience by allowing us to map detailed in vivo structural and functional organizations of the human brain, technologies to probe the rich molecular complexity and underpinnings of brain functions in vivo are lacking. Magnetic resonance spectroscopic imaging (MRSI) allows for multiplexed molecular imaging and metabolic profiling of the brain in vivo, but its applications have been limited by low sensitivity, poor spatial resolution, slow imaging speed and challenges in separating molecular signals of interest. In this talk, I will discuss our efforts in addressing these challenges. Specifically, I will present our progress on achieving simultaneous, high-resolution mapping of metabolites, neurotransmitters, and their biophysical parameters using a quantitative multidimensional MRSI approach that builds on and expands a subspace imaging framework. I will also discuss how we integrate physics-based modeling and machine learning to address the limitations of subspace modeling. Finally, I will discuss our collaborative efforts on clinical translation of the new imaging technology. We expect these developments to create new tools to help better understand the molecular basis of brain function and diseases, and to improve diagnosis and treatment assessment.
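The "subspace imaging framework" mentioned above rests on the idea that the space-time (Casorati) matrix of the spatiospectral signal is low rank. The sketch below illustrates only that core idea with a truncated SVD on synthetic data; real subspace MRSI reconstructions add physics-based priors, nuisance-signal removal, and spatial regularization.

```python
# Sketch of the low-rank "subspace" idea: the Casorati (voxels x time) matrix is
# modeled as a few spatial coefficient maps times temporal/spectral basis functions.
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_t, rank = 4096, 512, 8                  # voxels, FID samples, model order

U_true = rng.standard_normal((n_vox, rank))      # spatial coefficients
t = np.arange(n_t) * 1e-3
# Temporal basis: a few decaying complex exponentials (toy "metabolite" signals).
freqs = rng.uniform(-200, 200, rank)
V_true = np.exp((-20 + 2j * np.pi * freqs[:, None]) * t[None, :])

C = U_true @ V_true + 0.5 * (rng.standard_normal((n_vox, n_t))
                             + 1j * rng.standard_normal((n_vox, n_t)))

# Subspace estimation: truncated SVD of the noisy Casorati matrix.
U, s, Vh = np.linalg.svd(C, full_matrices=False)
C_hat = (U[:, :rank] * s[:rank]) @ Vh[:rank, :]

err = np.linalg.norm(C_hat - U_true @ V_true) / np.linalg.norm(U_true @ V_true)
print("relative error of rank-8 reconstruction:", round(float(err), 3))
```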
Fan Lam Assistant Professor, Department of Bioengineering, University of Illinois Urbana-Champaign
Dr. Fan Lam graduated from Tsinghua University with his BS in Biomedical Engineering. He received his PhD in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign (UIUC, 2015). Currently, he is an assistant professor in the Department of Bioengineering at UIUC, a full-time faculty member with the Beckman Institute for Advanced Science and Technology, and a co-director of the Master of Science in Biomedical Image Computing program at UIUC. Lam's research focuses on developing advanced magnetic resonance-based molecular imaging and multimodal brain mapping methods, and their applications to the study of brain function in normal and diseased states. Dr. Lam is a Junior Fellow of ISMRM (International Society of Magnetic Resonance in Medicine) and a recipient of an NSF CAREER Award (2020). Other awards include a Best Student Paper Award from IEEE-ISBI (International Symposium on Biomedical Imaging, 2015), the Robert T. Chien Memorial Award from ECE-UIUC (2015), an NIH-NIBIB Trailblazer Award (2020), and an NIH-NIGMS MIRA R35 Award (2021). Dr. Lam is a senior member of IEEE, serves as an Associate Editor for IEEE Transactions on Medical Imaging, and is a co-chair of the Young Scholar Committee at the World Association for Chinese Biomedical Engineers (WACBE).
Persistent homology summarizes the changes of topological structures in data over multiple scales called filtrations. Doing so detects hidden topological signals that persist over different scales. However, a key obstacle to applying persistent homology to brain networks has been the lack of a robust statistical inference framework. To address this problem, we present a new topological inference procedure based on the Wasserstein distance. Our approach requires no explicit models or distributional assumptions. The inference is performed in a completely data-driven fashion. Our metric-based inference differs significantly from traditional feature-based topological data analysis (TDA). The method is applied to resting-state functional magnetic resonance images (rs-fMRI) of temporal lobe epilepsy patients and is able to localize the brain regions that contribute the most to topological differences. Computer code is available on GitHub. The talk is based on Anand and Chung (2023, IEEE Transactions on Medical Imaging; arXiv:2110.14599) and Songdechakraiwut and Chung (2023, Annals of Applied Statistics; arXiv:2012.00675).
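As a simplified, hedged illustration of this style of inference, the sketch below reduces each (synthetic) connectivity matrix to its 0-dimensional persistence (connected-component merge values from the minimum spanning tree of 1 - correlation), compares groups with a Wasserstein-type distance between sorted values, and assesses significance by permutation. The cited papers develop a far more complete framework, including higher-dimensional features and localization, which this toy does not attempt.

```python
# Toy topological inference on brain networks: 0-dimensional persistence of a
# connectivity filtration compared between two groups with a Wasserstein-type
# distance and a permutation test. Synthetic data only.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(4)
n_nodes, n_per_group = 40, 20

def synth_connectome(shift=0.0):
    ts = rng.standard_normal((200, n_nodes))
    ts[:, :10] += shift * rng.standard_normal((200, 1))   # shared signal in a subnetwork
    return np.corrcoef(ts.T)

def death_times(corr):
    dist = 1.0 - corr
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()
    return np.sort(mst[mst > 0])            # n_nodes - 1 component merge values

def wasserstein2(a, b):
    return float(np.sqrt(np.sum((np.sort(a) - np.sort(b)) ** 2)))

groupA = [synth_connectome(0.0) for _ in range(n_per_group)]
groupB = [synth_connectome(0.8) for _ in range(n_per_group)]

def statistic(A, B):
    return wasserstein2(death_times(np.mean(A, 0)), death_times(np.mean(B, 0)))

obs = statistic(groupA, groupB)
pooled = np.array(groupA + groupB)
null = []
for _ in range(500):
    perm = rng.permutation(len(pooled))
    null.append(statistic(pooled[perm[:n_per_group]], pooled[perm[n_per_group:]]))
p = (1 + np.sum(np.array(null) >= obs)) / (1 + len(null))
print(f"observed W2 = {obs:.3f}, permutation p = {p:.3f}")
```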
Moo K. Chung Associate Professor, Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
Moo K. Chung, Ph.D. is an Associate Professor in the Department of Biostatistics and Medical Informatics at the University of Wisconsin-Madison (http://www.stat.wisc.edu/~mchung). Chung is affiliated with the Waisman Laboratory for Brain Imaging and Behavior and the Department of Statistics. Chung received his PhD from McGill University under Keith Worsley and trained at the Montreal Neurological Institute. Chung’s research focuses on computational neuroanatomy, spectral geometry, and topological data analysis. Chung mainly concentrates on the methodological development required for quantifying and contrasting brain functional, anatomical shape and network variations in both normal and clinical populations using various mathematical, statistical, and computational techniques. He has published three books on neuroimage computation, including Brain Network Analysis, published by Cambridge University Press in 2019. He has recently started writing a new book on topological data analysis for brain imaging.
For more than 100 years, researchers have sought a lesion (or lesions) that might lead to schizophrenia, the major psychotic disorder. Early imaging modalities revealed enlarged ventricles, which imply reduced brain volumes, but no characteristic cortical pathology was identified. High-resolution structural magnetic resonance imaging (MRI) finally provided a tool to identify cortical volume loss in vivo. Volumetric studies revealed not only initial gray matter loss in temporal and frontal cortices, but progressive loss after the emergence of psychosis. Advances in 3D microscopy in post-mortem studies indicated increased packing density of cortical pyramidal cells and reduced dendritic arborization in schizophrenia, leading to the idea that the disease reflected dendrotoxicity. Although specific areas (such as frontal and temporal cortex) showed more robust volume loss, the idea emerged that schizophrenia reflected a dysconnectivity among brain regions. Volumetric studies suggested that the areas with greatest loss were areas with the most extensive connectivity – heteromodal cortices or areas that served as hubs. Recently, thought has focused on structural and functional dysconnectivity in schizophrenia, with the emerging concept of the disorder being one of circuitopathy. Decomposing source activity from high-temporal-resolution methods such as EEG and MEG into spectral components allows for functional and effective connectivity measures between areas. Methods for spectral effective connectivity still need development and validation. Issues include reliable and valid source localization, methods for data reduction (particularly parcellation), identification of critical frequency bands underlying inter-areal communication, and derivation of networks from whole-brain data. Data and approaches from my laboratory will be used to illustrate how spectral connectivity indicates that alpha-band and theta-band dysconnectivity between cortical areas is a central deficit in psychosis, even early in the disease course at the emergence of psychosis.
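For a concrete sense of the basic spectral-connectivity step mentioned above, the minimal example below computes alpha-band (8-12 Hz) magnitude-squared coherence between two simulated source time series; the laboratory's actual pipeline involves source localization, parcellation, and effective-connectivity modeling that this sketch does not attempt.

```python
# Minimal spectral functional connectivity example: alpha-band coherence between
# two simulated source time series.
import numpy as np
from scipy.signal import coherence

fs, dur = 250.0, 60.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(5)

alpha = np.sin(2 * np.pi * 10 * t)         # shared 10 Hz rhythm
src1 = alpha + rng.standard_normal(t.size)
src2 = 0.7 * alpha + rng.standard_normal(t.size)    # partially shares the rhythm

f, coh = coherence(src1, src2, fs=fs, nperseg=int(2 * fs))
band = (f >= 8) & (f <= 12)
print("mean alpha-band coherence:", round(float(coh[band].mean()), 3))
```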
Dean Salisbury Professor, Department of Psychiatry, University of Pittsburgh School of Medicine
After graduating from the Scholar's Program at Whittier College in 1985, Dr Salisbury began studying Biological Psychology and human auditory neurophysiology with Prof. Nancy K Squires at Stony Brook. He began a post-doctoral fellowship in 1990 in Biological Psychiatry with Prof. Robert W McCarley at Harvard Medical School to examine auditory neurophysiology in schizophrenia. The 2-year post-doc turned into a 22-year career at Harvard, where Dr Salisbury worked with Dr McCarley, Prof. Martha E Shenton, and many others examining neurophysiological and MRI structural and functional measures of impaired sensation, perception, and basic memory function in first-episode psychosis. The work conducted in his laboratory at McLean Hospital helped to change the conceptualization of schizophrenia as a static, perinatal encephalopathy. It pioneered the combined use of structural brain imaging and electroencephalographic (EEG) measurement of auditory cortex responses to demonstrate that progressive gray matter loss during the early disease course of schizophrenia was linked to progressive auditory impairment. In 2012, he left Harvard to join the faculty at Western Psychiatric Hospital at the University of Pittsburgh School of Medicine. The continuing multimodal imaging work in first-episode psychosis individuals aims to identify local and distributed circuit abnormalities in early disease course and to develop biomarkers to facilitate early identification of the disorder.
Affect and motivation — pleasure and pain, desire and threat — are central to human life. Their experience defines our wellbeing, and the brain processes that underlie them drive behavior and learning. Developing models of the brain circuits that underlie them, and how they interact, could transform how we understand and measure them, and provide biological targets for interventions ranging from drugs to psychotherapy. Neuroimaging, including functional magnetic resonance imaging, is playing a transformative role in our ability to model the brain bases of affective and motivational processes. However, developing such models will require computational advances, particularly in our ability to model how emergent properties like pain arise from complex interactions among brain systems. In this talk, I describe a series of studies that combine fMRI with statistical and machine learning approaches to develop measures that are sensitive and specific for particular types of pain and affect and generalizable across diverse populations. These studies provide a brain-based picture of the organization of pain and affect, revealing both distinctions and similarities that are not predicted by folk psychological theories. We find that, on one hand, pain and other affective states are distributed, relying on interactions across multiple brain systems. At the same time, however, new techniques provide ways of decomposing these systems into particular pathways that can be referenced to animal models and targeted by interventions.
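The following is a generic, hedged sketch of the predictive-modeling workflow described above (not the speaker's published pain signature): a cross-validated linear model mapping trial-wise activation features to pain ratings, with whole subjects left out so performance reflects generalization to new individuals. All data are synthetic.

```python
# Sketch of a brain-based predictive measure: cross-validated linear model from
# fMRI activation features to pain ratings, with leave-whole-subjects-out folds.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(6)
n_subj, n_trials, n_vox = 30, 12, 2000
X = rng.standard_normal((n_subj * n_trials, n_vox))        # trial-wise activation maps
w = np.zeros(n_vox); w[:50] = 0.3                           # a sparse "pain pattern"
y = X @ w + rng.standard_normal(n_subj * n_trials)          # pain ratings
groups = np.repeat(np.arange(n_subj), n_trials)             # subject labels

model = make_pipeline(PCA(n_components=50), Ridge(alpha=10.0))
# Leaving whole subjects out means the score reflects generalization to new
# individuals rather than to new trials from the same people.
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                         groups=groups, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 2))
```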
Tor Wager Distinguished Professor, Department of Psychological and Brain Sciences, Dartmouth College
He received his Ph.D. from the University of Michigan in Cognitive Psychology in 2003, and served as an Assistant (2004-2008) and Associate Professor (2009) at Columbia University, and as Associate (2010-2014) and Full Professor (2014-2019) at the University of Colorado, Boulder. Since 2004, he has directed the Cognitive and Affective Neuroscience laboratory, a research lab devoted to work on the neurophysiology of affective processes—pain, emotion, stress, and empathy—and how they are shaped by cognitive and social influences. Dr. Wager and his lab are also dedicated to developing analysis methods for functional neuroimaging and sharing ideas, tools, and scientific data with the scientific community and public.
In this talk I will present our work, based on the development and application of mathematical methods from information theory and machine learning, to study how the functions of neural population codes emerge from the interactions of different neurons.
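The core quantity in this line of work is the mutual information between stimuli and neural responses. The toy example below computes a plug-in estimate of I(S;R) for a simulated single neuron; real population analyses require multivariate response spaces and careful bias correction, which this sketch omits.

```python
# Toy plug-in mutual information between a discrete stimulus and a binned
# single-neuron response.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 5000
stim = rng.integers(0, 4, n_trials)                      # 4 stimulus classes
rates = np.array([2.0, 4.0, 6.0, 8.0])                   # mean spike counts per class
resp = np.clip(rng.poisson(rates[stim]), 0, 15)          # noisy, binned spike counts

def mutual_information(s, r):
    joint = np.zeros((s.max() + 1, r.max() + 1))
    np.add.at(joint, (s, r), 1)                          # empirical joint histogram
    joint /= joint.sum()
    ps, pr = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

print("I(S;R) =", round(mutual_information(stim, resp), 3), "bits")
```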
Stefano Panzeri Full professor and director, Department of Excellence for Information Processing, Medical School in Hamburg (UKE)
Stefano Panzeri is full professor and director at the Department of Excellence for Information Processing at the Medical School in Hamburg (UKE). Stefano originally trained and carried out research in theoretical physics (string theory) and has worked in computational neuroscience for more than 20 years. His research lies at the interface between theory and experiments and investigates how the functions of the brain originate from the interactions between its elements, the neurons.
Computational neuroscience is a burgeoning field embracing exciting scientific questions, a deluge of data, an imperative demand for quantitative models, and a close affinity with artificial intelligence. These opportunities promote the advancement of data-driven machine learning methods to help neuroscientists deeply understand our brains. My work lies in this interdisciplinary field and spans the development of scientifically motivated probabilistic modeling approaches for neural and behavioral analyses. In this talk, I will first present my work on developing Bayesian methods to identify latent manifold structures, with applications to neural recordings in multiple cortical areas. The models are able to reveal the underlying signals of neural populations and to uncover interesting topography of neurons in settings where knowledge and understanding of the brain are lacking. Discovering such low-dimensional signals or structures can help shed light on how information is encoded at the population level, and provide significant scientific insight into the brain. Next, I will talk about probabilistic priors that encourage region-sparse activation for brain decoding. The proposed model provides spatial decoding weights for brain imaging data that are both more interpretable and achieve higher decoding performance. Finally, I will introduce a series of works on semi-supervised learning for animal behavior analysis and understanding. I will show that when we have a very limited amount of human-labeled data, semi-supervised learning frameworks can effectively resolve the data-scarcity issue by leveraging both labeled and unlabeled data in the context of pose tracking, video understanding, and behavioral segmentation. By actively working on both neural and behavioral studies, I hope to develop interpretable machine learning and Bayesian statistical approaches to understanding neural systems integrating extensive and complex behaviors, thus providing a systematic understanding of neural mechanisms and biological functions.
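As a minimal stand-in for the latent-manifold idea (not the speaker's Bayesian models), the sketch below applies factor analysis to simulated population spike counts and checks, via canonical correlation, how well a two-dimensional latent trajectory is recovered.

```python
# Toy latent-manifold recovery from simulated population spike counts using
# factor analysis. Illustrative only; the speaker's methods are richer.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(8)
n_time, n_neurons, n_latent = 1000, 80, 2

# A smooth 2-D latent trajectory driving the firing rates of all neurons.
t = np.linspace(0, 20 * np.pi, n_time)
latents = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)
loadings = rng.standard_normal((n_latent, n_neurons))
rates = np.exp(1.0 + 0.4 * latents @ loadings)
spikes = rng.poisson(rates)                         # simulated spike counts

# Square-root transform as a simple variance stabilizer, then factor analysis.
fa = FactorAnalysis(n_components=n_latent).fit(np.sqrt(spikes))
est = fa.transform(np.sqrt(spikes))

# Canonical correlations between estimated and true latents (recovery is only
# defined up to a linear mixing of the latent dimensions).
u, v = CCA(n_components=n_latent).fit_transform(est, latents)
corrs = [round(float(np.corrcoef(u[:, i], v[:, i])[0, 1]), 2) for i in range(n_latent)]
print("canonical correlations:", corrs)
```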
Anqi Wu Assistant Professor, School of Computational Science and Engineering (CSE), Georgia Institute of Technology
Anqi Wu is an Assistant Professor at the School of Computational Science and Engineering (CSE), Georgia Institute of Technology. She was a Postdoctoral Research Fellow at the Center for Theoretical Neuroscience, the Zuckerman Mind Brain Behavior Institute, Columbia University. She received her Ph.D. degree in Computational and Quantitative Neuroscience and a graduate certificate in Statistics and Machine Learning from Princeton University. Anqi was selected as a 2018 MIT Rising Star in EECS, a 2022 DARPA Riser, and a 2023 Alfred P. Sloan Fellow. Her research interest is to develop scientifically motivated Bayesian statistical models to characterize structure in neural and behavioral data in the interdisciplinary field of machine learning and computational neuroscience. She has a general interest in building data-driven models to promote both animal and human studies in systems and cognitive neuroscience.
Research in neuroscience often assumes universal neural mechanisms, but increasing evidence points towards sizeable individual differences in brain activations. What remains unclear is the extent of the idiosyncrasy and whether different types of analyses are associated with different levels of idiosyncrasy. Here we develop a new method for addressing these questions. The method consists of computing the within-subject reliability and subject-to-group similarity of brain activations and submitting these values to a computational model that quantifies the relative strength of group- and subject-level factors. We apply this method to a perceptual decision-making task and find that activations related to trial-level task, reaction time (RT), and confidence are influenced equally strongly by group- and subject-level factors. However, for activations related to average RT or confidence in a block of trials, the subject-level factors can be up to 6 times more important than group-level factors. In all cases, group- and subject-level factors are dwarfed by a noise factor. Overall, our method allows for the quantification of group- and subject-level factors of brain activations and thus provides a more detailed understanding of the idiosyncrasy levels in brain activations.
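The two inputs described above can be illustrated on synthetic activation maps as follows; the computational model that converts them into group- versus subject-level factor strengths is the speaker's contribution and is not reproduced here.

```python
# Sketch of the two inputs named in the abstract: within-subject reliability
# (split-half correlation of a subject's activation maps) and subject-to-group
# similarity (correlation with the leave-one-out group average). Synthetic data.
import numpy as np

rng = np.random.default_rng(9)
n_subj, n_vox = 25, 5000

group_map = rng.standard_normal(n_vox)                     # shared activation pattern
subj_maps = group_map + 1.0 * rng.standard_normal((n_subj, n_vox))   # idiosyncratic parts
half1 = subj_maps + 0.8 * rng.standard_normal((n_subj, n_vox))       # measurement noise
half2 = subj_maps + 0.8 * rng.standard_normal((n_subj, n_vox))

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

reliability = [corr(half1[i], half2[i]) for i in range(n_subj)]
similarity = []
for i in range(n_subj):
    others = np.mean(np.delete((half1 + half2) / 2, i, axis=0), axis=0)
    similarity.append(corr((half1[i] + half2[i]) / 2, others))

print("mean within-subject reliability:", round(float(np.mean(reliability)), 2))
print("mean subject-to-group similarity:", round(float(np.mean(similarity)), 2))
```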
Dobromir Rahnev Associate Professor, School of Psychology, Georgia Institute of Technology
Dr. Rahnev received his Ph.D. in Psychology from Columbia University in 2012. After completing a 3-year post-doctoral fellowship at UC Berkeley, he joined Georgia Tech in 2015, where he is currently the Blanchard Early Career Professor. His research focuses on perceptual decision making – the process of internally representing the available sensory information and making decisions on it. Dr. Rahnev uses a wide variety of methods such as functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), psychophysics, and computational modeling. Dr. Rahnev’s work appears in high-impact journals such as Behavioral and Brain Sciences, PNAS, Nature Communications, and Nature Human Behaviour. He has received over $3.5M in funding, including PI grants from NIH, NSF, and the Office of Naval Research.
Brain connectomics has become increasingly important in neuroimaging studies to advance understanding of neural circuits and their association with neurodevelopment, mental illnesses, and aging. These analyses often face major challenges, including the high dimensionality of brain networks, unknown latent sources underlying the observed connectivity, and the large number of brain connections leading to spurious findings. In this talk, we will introduce a novel regularized blind source separation (BSS) framework for reliable mapping of the neural circuits underlying the static and dynamic brain functional connectome. The proposed LOCUS methods achieve more efficient and reliable source separation for connectivity matrices using low-rank factorization, a novel angle-based sparsity regularization, and a temporal smoothness regularization. We develop a highly efficient iterative Node-Rotation algorithm that solves the non-convex optimization problem for learning LOCUS models. Simulation studies demonstrate that the proposed methods have consistently improved accuracy in retrieving latent connectivity traits. Application of LOCUS methods to the Philadelphia Neurodevelopmental Cohort (PNC) neuroimaging study generates considerably more reproducible findings in revealing underlying neural circuits and their association with demographic and clinical phenotypes, uncovers dynamic expression profiles of the circuits and the synchronization between them, and generates insights into gender differences in the neurodevelopment of brain circuits.
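The sketch below illustrates only the underlying generative idea, namely that subject connectomes are noisy mixtures of low-rank symmetric connectivity "traits", and recovers the traits with a plain ICA-on-vectorized-connectomes baseline. It is not the LOCUS algorithm, which adds low-rank factorization with angle-based sparsity and the Node-Rotation solver.

```python
# Toy model: each subject's connectome is a weighted mixture of rank-1 symmetric
# "traits" plus noise; traits are recovered with a simple ICA baseline.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
n_nodes, n_subj = 30, 100
iu = np.triu_indices(n_nodes, k=1)

def rank1_trait():
    x = rng.standard_normal(n_nodes)
    return np.outer(x, x)[iu]                       # upper triangle of x x^T

traits = np.stack([rank1_trait(), rank1_trait()])   # 2 latent traits (vectorized)
loadings = rng.standard_normal((n_subj, 2))
connectomes = loadings @ traits + 0.5 * rng.standard_normal((n_subj, iu[0].size))

# Treat edges as samples and subjects as mixtures; extract 2 edge-wise sources.
est = FastICA(n_components=2, random_state=0).fit_transform(connectomes.T).T

# Check recovery: each estimated source should correlate highly with one true trait.
for k in range(2):
    best = max(abs(np.corrcoef(est[k], traits[j])[0, 1]) for j in range(2))
    print(f"estimated trait {k}: best |r| with a true trait = {best:.2f}")
```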
Ying Guo Professor, Department of Biostatistics and Bioinformatics, Emory University
Dr. Ying Guo is Professor in the Department of Biostatistics and Bioinformatics at Emory University, an appointed Graduate Faculty of the Emory Neuroscience Program and an Associate Faculty in the Emory Department of Computer Science. She is a Founding Member and current Director of the Center for Biomedical Imaging Statistics (CBIS) at Emory University. Dr. Guo’s research focuses on developing analytical methods for neuroimaging and mental health studies. Her main research areas include statistical methods for agreement and reproducibility studies, brain network analysis, multimodal neuroimaging, and imaging-based prediction methods. Dr. Guo is a Fellow of the American Statistical Association (ASA) and the 2023 Chair of the ASA Statistics in Imaging Section. She is a Standing Member of the NIH Emerging Imaging Technologies in Neuroscience (EITN) Study Section and has served on the editorial boards of several scientific journals in statistics and psychiatry.
The prenatal period of life is a time of considerable brain development, as the brain emerges from a single cell into an organ that very much resembles the adult brain by the time of birth. This period of extensive growth is also a critical period during which environmental factors, such as nutrition, cannabis, cigarette smoking, medication, and other exposures can influence brain development. Following birth, the brain undergoes continued development, and postnatal factors can also influence optimal brain development. This talk will describe the effects of environmental factors during the prenatal and early postnatal period on the developing brain, including the likely role of stochastic processes. The effects of nutrition, substance use, medications, and other environmental exposures during prenatal life on brain structure and function will be discussed within the context of a large, population-based study of child development. I will also provide an overview of how stochastic events can influence brain development, as well as an important hypothesis of how optimizing neurodevelopment during fetal life may, within a population context, prevent the emergence of psychopathology.
Tonya White Chief, Section on Social and Cognitive Developmental Neuroscience, National Institute of Mental Health
Tonya White, MD, PhD moved in July 2022 from Erasmus University in the Netherlands to head the Section on Social and Cognitive Developmental Neuroscience at the National Institute of Mental Health in Bethesda, Maryland. While in Rotterdam, she was Professor of Pediatric Population Neuroimaging in the Department of Child and Adolescent Psychiatry and in the Department of Radiology and Nuclear Medicine at Erasmus University Medical Centre. Dr. White has an eclectic educational background, having received a Bachelor's degree in electrical engineering (Magna Cum Laude) from the University of Utah and a Master's degree in electrical engineering from the University of Illinois, Champaign/Urbana. She received her medical degree from the University of Illinois and later a Ph.D. from Erasmus University in Rotterdam, the Netherlands. Following a junior faculty position at the University of Minnesota, she joined the faculty at Erasmus University Medical Center in 2009 to set up and direct a pediatric population neuroimaging program within the Generation R Study, a large epidemiological study of child development. While at the Erasmus University Medical Centre, her group acquired over 9,000 brain imaging scans of children ranging from 6 to 17 years of age across four waves of data collection. Her primary focus lies in better understanding the underlying neurobiology in children with neurodevelopmental disorders, including autism spectrum disorders. The work that she will present during her talk will stem from her efforts in the Generation R Study.
Neuroimaging has significantly expanded our understanding of brain changes in neuropsychiatric disorders as well as in aging and neurodegenerative diseases. However, it wasn’t until the advent of machine learning tools that imaging signatures detectable in individuals, rather than groups, were constructed. This talk will present work on deriving imaging signatures of diagnostic and predictive value. It will then focus on weakly-supervised machine learning methods for analyzing the heterogeneity of brain imaging phenotypes, arriving at a dimensional representation reflecting the heterogeneity of brain aging and of various brain diseases. Finally, international consortia pooling and harmonizing large numbers of brain MRIs from many studies are presented as a means of creating sufficiently large datasets for robust machine learning training and heterogeneity analysis; such consortia also pose new challenges, including that of harmonization across studies.
Christos Davatzikos Wallace T. Miller Sr. Professor of Radiology, University of Pennsylvania
Dr. Christos Davatzikos is the Wallace T. Miller Sr. Professor of Radiology at the University of Pennsylvania, and Director of the recently founded AI2D Center for AI and Data Science for Integrated Diagnostics. He has been the Founding Director of the Center for Biomedical Image Computing and Analytics since 2013, and the director of the AIBIL lab (AI in Biomedical Imaging). He holds a secondary appointment in Electrical and Systems Engineering and in the Division of Informatics at Penn, as well as in the Bioengineering and Applied Mathematics graduate groups. He obtained his undergraduate degree from the National Technical University of Athens, Greece in 1989, and his Ph.D. degree from Johns Hopkins in 1994, on a Fulbright scholarship. He then joined the faculty in Radiology and later in Computer Science, where he founded and directed the Neuroimaging Laboratory. In 2002 he moved to Penn, where he founded and directed the section of biomedical image analysis. Dr. Davatzikos’ interests are in medical image analysis. He oversees a diverse research program ranging from basic problems of imaging pattern analysis and machine learning to a variety of clinical studies of aging and Alzheimer’s Disease, schizophrenia, brain cancer, and brain development. Dr. Davatzikos has served on a variety of scientific journal editorial boards and grant review committees. He is an IEEE Fellow, a Fellow of the American Institute for Medical and Biological Engineering, and a member of the council of distinguished investigators of the US Academy of Radiology and Biomedical Imaging Research.
The nature of mental illness remains a conundrum. Traditional disease categories are increasingly suspected to misrepresent the causes underlying mental disturbance. Yet psychiatrists and investigators now have an unprecedented opportunity to benefit from complex patterns in brain, behavior, and genes using methods from machine learning (e.g., support vector machines, modern neural-network algorithms, cross-validation procedures). Combining these analysis techniques with a wealth of data from consortia and repositories has the potential to advance a biologically grounded redefinition of major psychiatric disorders. Increasing evidence suggests that data-derived subgroups of psychiatric patients can better predict treatment outcomes than DSM/ICD diagnoses can. In a new era of evidence-based psychiatry tailored to single patients, objectively measurable endophenotypes could allow for early disease detection, individualized treatment selection, and dosage adjustment to reduce the burden of disease. This primer aims to introduce clinicians and researchers to the opportunities and challenges in bringing machine intelligence into psychiatric practice.
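A minimal illustration of the workflow named in the abstract (support vector machines with cross-validation) is given below on synthetic data; real clinical applications additionally require careful handling of site effects, class imbalance, and external validation.

```python
# Cross-validated SVM predicting a clinical outcome (e.g., treatment response)
# from brain and behavioral features. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(11)
n_patients, n_features = 300, 120
X = rng.standard_normal((n_patients, n_features))          # imaging + behavioral features
y = (X[:, :10].sum(axis=1) + rng.standard_normal(n_patients) > 0).astype(int)  # responder / non-responder

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print("cross-validated balanced accuracy:", np.round(scores, 2),
      "mean:", round(float(scores.mean()), 2))
```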
Danilo Bzdok Associate Professor, McGill University
Danilo Bzdok is a medical doctor and computer scientist with a dual background in systems neuroscience and machine learning algorithms. After medical training at RWTH Aachen University (Germany), Université de Lausanne (Switzerland), and Harvard Medical School (USA), he completed one Ph.D. in brain-imaging neuroscience (Research Center Juelich, Germany, 2012) and one Ph.D. in computer science in machine learning statistics at INRIA Saclay and Neurospin (France, 2016). Danilo currently serves as Associate Professor at McGill's Faculty of Medicine and as Canada CIFAR AI Chair at Mila - Quebec Artificial Intelligence Institute, Montreal, Canada.
Invasive and noninvasive brain stimulation methods are applied to focal points in the depth or on the surface of the brain. However, their focal application leads to network effects that are distributed across the entire brain. We can study network effects of focal brain stimulation by pairing them with the human connectome. By doing so, we may investigate which networks need to be stimulated to observe a specific effect. Moreover, we can use brain stimulation sites to segregate the human connectome into functional networks, each tied to specific behaviors, clinical signs or symptoms. One particularly useful method is deep brain stimulation, an invasive neurosurgical procedure that applies highly localized but strong stimulation signals onto specific subcortical areas. In this talk, I will review connectomic effects of deep brain stimulation and other brain stimulation methods. We will cover results in diseases ranging from the movement disorders spectrum (Parkinson’s Disease, Dystonia, Essential Tremor) to neuropsychiatric (Tourette’s & Alzheimer’s Disease) and psychiatric (Obsessive Compulsive Disorder, Depression) diseases. I will also demonstrate how findings in seemingly different diseases (such as Parkinson’s Disease and Depression) could be transferred to cross-inform one another and how the same method can be used to study neurocognitive effects, such as risk-taking behavior or impulsivity.
Deep learning has disrupted nearly every major field of study from computer vision to genomics. The unparalleled success of these models has, in many cases, been fueled by an explosion of data. Millions of labeled images, thousands of annotated ICU admissions, and hundreds of hours of transcribed speech are common standards in the literature. Clinical neuroscience is a notable holdout to this trend. It is a field of unavoidably small datasets, massive patient variability, and complex (largely unknown) phenomena. My lab tackles these challenges across a spectrum of projects, from answering foundational neuroscientific questions to translational applications of neuroimaging data to exploratory directions for probing neural circuitry. One of our key strategies is to integrate a priori information about the brain and biology into the model design. This talk will highlight two ongoing projects that epitomize this strategy. First, I will showcase an end-to-end deep learning framework that fuses neuroimaging, genetic, and phenotypic data, while maintaining interpretability of the extracted biomarkers. We use a learnable dropout layer to extract a sparse subset of predictive imaging features and a biologically informed deep network architecture for whole-genome analysis. Specifically, the network uses hierarchical graph convolutions that mimic the organization of a well-established gene ontology to track the convergence of genetic risk across biological pathways. Second, I will present a deep-generative hybrid model for epileptic seizure detection from scalp EEG. The latent variables in this model capture the spatiotemporal spread of a seizure; they are complemented by a nonparametric likelihood based on convolutional neural networks. I will also highlight our current end-to-end extensions of this work focused on seizure onset localization.
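The following sketch illustrates the general idea of a "learnable dropout" feature-selection layer using a concrete (Gumbel-sigmoid) relaxation with a sparsity penalty; it is an assumption-laden illustration, not the speaker's published architecture.

```python
# Sketch of a learnable dropout gate: each input feature has a trainable
# inclusion logit, relaxed with logistic noise so the gates stay differentiable,
# plus a penalty on the expected number of kept features.
import torch
import torch.nn as nn

class LearnableDropout(nn.Module):
    def __init__(self, n_features, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_features))   # per-feature gate logits
        self.temperature = temperature

    def forward(self, x):
        if self.training:
            u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)            # logistic noise
            gate = torch.sigmoid((self.logits + noise) / self.temperature)
        else:
            gate = torch.sigmoid(self.logits)                  # expected gate at test time
        return x * gate

    def sparsity_penalty(self):
        return torch.sigmoid(self.logits).sum()                # expected number of kept features

# Tiny usage example: gated imaging features feeding a linear predictor.
n_features = 200
model = nn.Sequential(LearnableDropout(n_features), nn.Linear(n_features, 1))
x, y = torch.randn(64, n_features), torch.randn(64, 1)
loss = nn.functional.mse_loss(model(x), y) + 1e-3 * model[0].sparsity_penalty()
loss.backward()                                                # one illustrative step
print(float(loss))
```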
Archana Venkataraman Associate Professor, Boston University
Dr. Archana Venkataraman is an Associate Professor of Electrical and Computer Engineering at Boston University. From 2016 to 2022, she was an Assistant Professor at Johns Hopkins University. Dr. Venkataraman directs the Neural Systems Analysis Laboratory and is affiliated with the Department of Biostatistics, the Department of Biomedical Engineering, the Center for Brain Recovery, and the Rafik B. Hariri Institute for Computing at Boston University. Dr. Venkataraman’s research lies at the intersection of biomedical imaging, artificial intelligence, and clinical neuroscience. Her work has yielded novel insights into debilitating neurological disorders, such as autism, schizophrenia, and epilepsy, with the long-term goal of improving patient care. Dr. Venkataraman completed her B.S., M.Eng. and Ph.D. in Electrical Engineering at MIT in 2006, 2007 and 2012, respectively. She is a recipient of the MIT Provost Presidential Fellowship, the Siebel Scholarship, the National Defense Science and Engineering Graduate Fellowship, the NIH Advanced Multimodal Neuroimaging Training Grant, numerous best paper awards, and the National Science Foundation CAREER award. Dr. Venkataraman was also named by MIT Technology Review as one of 35 Innovators Under 35 in 2019.